00:00:00.001 Started by upstream project "autotest-per-patch" build number 122894
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.059 The recommended git tool is: git
00:00:00.059 using credential 00000000-0000-0000-0000-000000000002
00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.093 Fetching changes from the remote Git repository
00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.131 Using shallow fetch with depth 1
00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.131 > git --version # timeout=10
00:00:00.160 > git --version # 'git version 2.39.2'
00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.161 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.161 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.975 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.985 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.996 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD)
00:00:04.996 > git config core.sparsecheckout # timeout=10
00:00:05.005 > git read-tree -mu HEAD # timeout=10
00:00:05.021 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5
00:00:05.039 Commit message: "inventory/dev: add missing long names"
00:00:05.039 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10
00:00:05.145 [Pipeline] Start of Pipeline
00:00:05.157 [Pipeline] library
00:00:05.159 Loading library shm_lib@master
00:00:05.159 Library shm_lib@master is cached. Copying from home.
00:00:05.172 [Pipeline] node
00:00:05.181 Running on WFP43 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.182 [Pipeline] {
00:00:05.194 [Pipeline] catchError
00:00:05.195 [Pipeline] {
00:00:05.209 [Pipeline] wrap
00:00:05.219 [Pipeline] {
00:00:05.226 [Pipeline] stage
00:00:05.227 [Pipeline] { (Prologue)
00:00:05.444 [Pipeline] sh
00:00:05.727 + logger -p user.info -t JENKINS-CI
00:00:05.744 [Pipeline] echo
00:00:05.745 Node: WFP43
00:00:05.754 [Pipeline] sh
00:00:06.055 [Pipeline] setCustomBuildProperty
00:00:06.063 [Pipeline] echo
00:00:06.064 Cleanup processes
00:00:06.071 [Pipeline] sh
00:00:06.354 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.354 3372257 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.404 [Pipeline] sh
00:00:06.689 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.689 ++ grep -v 'sudo pgrep'
00:00:06.689 ++ awk '{print $1}'
00:00:06.689 + sudo kill -9
00:00:06.689 + true
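The `+ sudo kill -9` followed by `+ true` above is worth decoding: pgrep matched only its own `sudo pgrep` invocation, `grep -v 'sudo pgrep'` filtered that out, so kill received no PIDs and failed, and a `|| true` guard absorbed the failure so the stage stays green. A minimal sketch of the same cleanup idiom, reusing the workspace path from the log purely for illustration:

# Kill any processes still running out of the workspace; tolerate "nothing found".
ws=/var/jenkins/workspace/nvmf-phy-autotest
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true  # unquoted on purpose: an empty list makes kill fail, || true absorbs it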
00:00:06.705 [Pipeline] cleanWs
00:00:06.714 [WS-CLEANUP] Deleting project workspace...
00:00:06.714 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.720 [WS-CLEANUP] done
00:00:06.724 [Pipeline] setCustomBuildProperty
00:00:06.736 [Pipeline] sh
00:00:07.015 + sudo git config --global --replace-all safe.directory '*'
00:00:07.070 [Pipeline] nodesByLabel
00:00:07.071 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.078 [Pipeline] httpRequest
00:00:07.082 HttpMethod: GET
00:00:07.083 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.088 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.091 Response Code: HTTP/1.1 200 OK
00:00:07.092 Success: Status code 200 is in the accepted range: 200,404
00:00:07.092 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:09.809 [Pipeline] sh
00:00:10.092 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:10.112 [Pipeline] httpRequest
00:00:10.117 HttpMethod: GET
00:00:10.118 URL: http://10.211.164.101/packages/spdk_01137ce67dba93005212d0a6e244aa2a6f4c88ef.tar.gz
00:00:10.118 Sending request to url: http://10.211.164.101/packages/spdk_01137ce67dba93005212d0a6e244aa2a6f4c88ef.tar.gz
00:00:10.140 Response Code: HTTP/1.1 200 OK
00:00:10.141 Success: Status code 200 is in the accepted range: 200,404
00:00:10.141 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_01137ce67dba93005212d0a6e244aa2a6f4c88ef.tar.gz
00:00:58.021 [Pipeline] sh
00:00:58.305 + tar --no-same-owner -xf spdk_01137ce67dba93005212d0a6e244aa2a6f4c88ef.tar.gz
00:01:00.855 [Pipeline] sh
00:01:01.141 + git -C spdk log --oneline -n5
00:01:01.141 01137ce67 lib/nvme: delete PCIe I/O qpair asynchronously
00:01:01.141 7a8d39909 Revert "test/common: Enable inherit_errexit"
00:01:01.141 4506c0c36 test/common: Enable inherit_errexit
00:01:01.141 b24df7cfa test: Drop superfluous calls to print_backtrace()
00:01:01.141 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:01:01.154 [Pipeline] }
00:01:01.169 [Pipeline] // stage
00:01:01.177 [Pipeline] stage
00:01:01.179 [Pipeline] { (Prepare)
00:01:01.195 [Pipeline] writeFile
00:01:01.212 [Pipeline] sh
00:01:01.495 + logger -p user.info -t JENKINS-CI
00:01:01.508 [Pipeline] sh
00:01:01.789 + logger -p user.info -t JENKINS-CI
00:01:01.800 [Pipeline] sh
00:01:02.083 + cat autorun-spdk.conf
00:01:02.083 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.083 SPDK_TEST_NVMF=1
00:01:02.083 SPDK_TEST_NVME_CLI=1
00:01:02.083 SPDK_TEST_NVMF_NICS=mlx5
00:01:02.083 SPDK_RUN_UBSAN=1
00:01:02.083 NET_TYPE=phy
00:01:02.083 RUN_NIGHTLY=0
00:01:02.094 [Pipeline] readFile
00:01:02.118 [Pipeline] withEnv
00:01:02.119 [Pipeline] {
00:01:02.134 [Pipeline] sh
00:01:02.420 + set -ex
00:01:02.420 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:02.420 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:02.420 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.420 ++ SPDK_TEST_NVMF=1
00:01:02.420 ++ SPDK_TEST_NVME_CLI=1
00:01:02.420 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:02.420 ++ SPDK_RUN_UBSAN=1
00:01:02.420 ++ NET_TYPE=phy
00:01:02.420 ++ RUN_NIGHTLY=0
00:01:02.420 + case $SPDK_TEST_NVMF_NICS in
00:01:02.420 + DRIVERS=mlx5_ib
00:01:02.420 + [[ -n mlx5_ib ]]
00:01:02.420 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:02.420 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:08.994 rmmod: ERROR: Module irdma is not currently loaded
00:01:08.994 rmmod: ERROR: Module i40iw is not currently loaded
00:01:08.994 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:08.994 + true
00:01:08.994 + for D in $DRIVERS
00:01:08.994 + sudo modprobe mlx5_ib
00:01:08.994 + exit 0
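The block above is the NIC-driver reset: autorun-spdk.conf selects mlx5, every RDMA-capable module is force-removed (the "not currently loaded" errors are expected and absorbed; note mlx5_ib produced no error because it really was loaded), and only the driver the job needs is re-probed. A minimal sketch of the pattern, assuming the conf variable names shown in the log (the e810 branch is hypothetical, added only to show the shape of the case statement):

# Reset RDMA NIC drivers according to the test configuration.
source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
case "$SPDK_TEST_NVMF_NICS" in
    mlx5) DRIVERS=mlx5_ib ;;
    e810) DRIVERS=irdma ;;  # hypothetical branch, for illustration only
esac
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true  # unloaded modules just warn
for D in $DRIVERS; do
    sudo modprobe "$D"      # reload only the driver this job actually needs
done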
00:01:09.004 [Pipeline] }
00:01:09.020 [Pipeline] // withEnv
00:01:09.025 [Pipeline] }
00:01:09.042 [Pipeline] // stage
00:01:09.051 [Pipeline] catchError
00:01:09.053 [Pipeline] {
00:01:09.068 [Pipeline] timeout
00:01:09.068 Timeout set to expire in 40 min
00:01:09.069 [Pipeline] {
00:01:09.085 [Pipeline] stage
00:01:09.087 [Pipeline] { (Tests)
00:01:09.102 [Pipeline] sh
00:01:09.386 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:09.386 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:09.386 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:09.386 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:09.386 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:09.386 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:09.386 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:09.386 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:09.386 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:09.386 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:09.386 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:09.386 + source /etc/os-release
00:01:09.386 ++ NAME='Fedora Linux'
00:01:09.386 ++ VERSION='38 (Cloud Edition)'
00:01:09.386 ++ ID=fedora
00:01:09.386 ++ VERSION_ID=38
00:01:09.386 ++ VERSION_CODENAME=
00:01:09.386 ++ PLATFORM_ID=platform:f38
00:01:09.386 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:09.386 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:09.386 ++ LOGO=fedora-logo-icon
00:01:09.386 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:09.386 ++ HOME_URL=https://fedoraproject.org/
00:01:09.386 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:09.386 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:09.386 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:09.386 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:09.386 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:09.386 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:09.386 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:09.386 ++ SUPPORT_END=2024-05-14
00:01:09.386 ++ VARIANT='Cloud Edition'
00:01:09.386 ++ VARIANT_ID=cloud
00:01:09.386 + uname -a
00:01:09.386 Linux spdk-wfp-43 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:09.386 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:12.679 Hugepages
00:01:12.679 node hugesize free / total
00:01:12.679 node0 1048576kB 0 / 0
00:01:12.679 node0 2048kB 0 / 0
00:01:12.679 node1 1048576kB 0 / 0
00:01:12.679 node1 2048kB 0 / 0
00:01:12.679
00:01:12.679 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.679 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:12.679 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:12.679 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:12.679 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
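setup.sh status summarizes what the tests can use: per-NUMA-node hugepage pools (all 0 / 0 here; the test scripts allocate them later) and the PCI functions SPDK cares about, where only 0000:5f:00.0 is an NVMe controller (nvme0/nvme0n1) and the rest are I/OAT DMA engines split across nodes 0 and 1. A minimal sketch of reading the same hugepage counters straight from sysfs, assuming standard Linux sysfs paths:

# Per-node 2 MiB hugepage availability, the same numbers "setup.sh status" prints.
for node in /sys/devices/system/node/node*; do
    total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
    echo "${node##*/}: $free / $total 2048kB pages free"
done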
00:01:12.679 + rm -f /tmp/spdk-ld-path
00:01:12.679 + source autorun-spdk.conf
00:01:12.679 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.679 ++ SPDK_TEST_NVMF=1
00:01:12.679 ++ SPDK_TEST_NVME_CLI=1
00:01:12.679 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:12.679 ++ SPDK_RUN_UBSAN=1
00:01:12.679 ++ NET_TYPE=phy
00:01:12.679 ++ RUN_NIGHTLY=0
00:01:12.679 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.679 + [[ -n '' ]]
00:01:12.679 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:12.679 + for M in /var/spdk/build-*-manifest.txt
00:01:12.679 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.679 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:12.679 + for M in /var/spdk/build-*-manifest.txt
00:01:12.679 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.679 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:12.679 ++ uname
00:01:12.679 + [[ Linux == \L\i\n\u\x ]]
00:01:12.679 + sudo dmesg -T
00:01:12.679 + sudo dmesg --clear
00:01:12.679 + dmesg_pid=3373121
00:01:12.679 + sudo dmesg -Tw
00:01:12.679 + [[ Fedora Linux == FreeBSD ]]
00:01:12.679 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.679 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.679 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:12.679 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:12.679 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:12.679 + [[ -x /usr/src/fio-static/fio ]]
00:01:12.679 + export FIO_BIN=/usr/src/fio-static/fio
00:01:12.679 + FIO_BIN=/usr/src/fio-static/fio
00:01:12.679 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:12.679 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:12.679 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:12.679 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.679 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.679 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:12.679 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.679 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.679 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:12.679 Test configuration:
00:01:12.679 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.679 SPDK_TEST_NVMF=1
00:01:12.679 SPDK_TEST_NVME_CLI=1
00:01:12.679 SPDK_TEST_NVMF_NICS=mlx5
00:01:12.679 SPDK_RUN_UBSAN=1
00:01:12.679 NET_TYPE=phy
00:01:12.679 RUN_NIGHTLY=0
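autorun.sh echoes this merged "Test configuration" before dispatching; inside the SPDK autotest scripts each SPDK_TEST_* flag gates one suite. A minimal sketch of that gating, assuming the variable names above (run_test is the SPDK harness helper seen later in this log; the suite script paths and transport argument are illustrative):

# Dispatch only the suites this job's configuration enables (paths illustrative).
source ./autorun-spdk.conf
if [[ $SPDK_TEST_NVMF -eq 1 ]]; then
    run_test "nvmf" test/nvmf/nvmf.sh --transport=rdma  # mlx5 NICs imply an RDMA transport
fi
if [[ $SPDK_TEST_NVME_CLI -eq 1 ]]; then
    run_test "nvme_cli" test/nvme/spdk_nvme_cli.sh
fi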
12:44:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
12:44:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
12:44:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:44:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:44:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:44:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:44:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:44:50 -- paths/export.sh@5 -- $ export PATH
12:44:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
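paths/export.sh prepends one toolchain directory per script line, which is why the traced PATH accumulates duplicates (the golangci-lint, protoc and go directories each appear twice); only the final export PATH matters. A sketch of the same prepend with a dedup guard added (the guard is an addition for illustration, not in the logged script):

# Prepend a tool dir to PATH, skipping it if already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already there, keep PATH stable
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH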
12:44:50 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
12:44:50 -- common/autobuild_common.sh@437 -- $ date +%s
12:44:50 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715769890.XXXXXX
12:44:50 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715769890.Y9S396
12:44:50 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
12:44:50 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
12:44:50 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
12:44:50 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
12:44:50 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:44:50 -- common/autobuild_common.sh@453 -- $ get_config_params
12:44:50 -- common/autotest_common.sh@395 -- $ xtrace_disable
12:44:50 -- common/autotest_common.sh@10 -- $ set +x
12:44:50 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
12:44:50 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
12:44:50 -- pm/common@17 -- $ local monitor
12:44:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:44:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:44:50 -- pm/common@21 -- $ date +%s
12:44:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:44:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:44:50 -- pm/common@21 -- $ date +%s
12:44:50 -- pm/common@21 -- $ date +%s
12:44:50 -- pm/common@25 -- $ sleep 1
12:44:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715769890
12:44:50 -- pm/common@21 -- $ date +%s
12:44:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715769890
12:44:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715769890
12:44:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715769890
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715769890_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715769890_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715769890_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715769890_collect-bmc-pm.bmc.pm.log
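start_monitor_resources backgrounds one collector per resource (CPU load, CPU temperature, vmstat, and BMC power via sudo -E), all sharing the epoch-stamped prefix monitor.autobuild.sh.1715769890 so their .pm.log files can be correlated afterwards. A minimal sketch of the launch pattern (collector list shortened; the -d/-l/-p flags match the trace above, the output directory is illustrative):

# Launch resource collectors in the background, one log per collector.
ts=$(date +%s)
out=./output/power          # illustrative output dir
mkdir -p "$out"
for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
    ./scripts/perf/pm/"$mon" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
done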
12:44:50 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
12:44:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:44:51 -- spdk/autobuild.sh@12 -- $ umask 022
12:44:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
12:44:51 -- spdk/autobuild.sh@16 -- $ date -u
00:01:13.618 Wed May 15 10:44:51 AM UTC 2024
12:44:51 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:13.618 v24.05-pre-660-g01137ce67
12:44:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
12:44:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:44:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:44:51 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
12:44:51 -- common/autotest_common.sh@1103 -- $ xtrace_disable
12:44:51 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.618 ************************************
00:01:13.618 START TEST ubsan
00:01:13.618 ************************************
12:44:51 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:01:13.618 using ubsan
00:01:13.618
00:01:13.618 real 0m0.000s
00:01:13.618 user 0m0.000s
00:01:13.618 sys 0m0.000s
12:44:51 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
12:44:51 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:13.618 ************************************
00:01:13.618 END TEST ubsan
00:01:13.618 ************************************
12:44:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:44:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:44:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:44:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:44:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:44:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:44:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:44:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:44:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:13.878 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:13.878 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:14.139 Using 'verbs' RDMA provider
00:01:27.290 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:39.598 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:39.598 Creating mk/config.mk...done.
00:01:39.598 Creating mk/cc.flags.mk...done.
00:01:39.598 Type 'make' to build.
12:45:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
12:45:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
12:45:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable
12:45:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.598 ************************************
00:01:39.598 START TEST make
00:01:39.598 ************************************
12:45:17 make -- common/autotest_common.sh@1121 -- $ make -j72
00:01:40.166 make[1]: Nothing to be done for 'all'.
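run_test brackets each suite between START TEST / END TEST banners and times it, which is where the real/user/sys 0m0.000s lines for the trivial ubsan check come from; the traced '[' 3 -le 1 ']' is its arity check (test name plus at least one command word). A simplified sketch of the wrapper's shape, not the real implementation:

# Simplified shape of SPDK's run_test helper (sketch only).
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # run the suite; the real/user/sys lines come from this
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}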
00:01:48.286 The Meson build system
00:01:48.286 Version: 1.3.1
00:01:48.286 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:48.286 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:48.286 Build type: native build
00:01:48.286 Program cat found: YES (/usr/bin/cat)
00:01:48.286 Project name: DPDK
00:01:48.286 Project version: 23.11.0
00:01:48.286 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:48.286 C linker for the host machine: cc ld.bfd 2.39-16
00:01:48.286 Host machine cpu family: x86_64
00:01:48.286 Host machine cpu: x86_64
00:01:48.286 Message: ## Building in Developer Mode ##
00:01:48.286 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:48.286 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:48.286 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:48.286 Program python3 found: YES (/usr/bin/python3)
00:01:48.286 Program cat found: YES (/usr/bin/cat)
00:01:48.286 Compiler for C supports arguments -march=native: YES
00:01:48.286 Checking for size of "void *" : 8
00:01:48.286 Checking for size of "void *" : 8 (cached)
00:01:48.286 Library m found: YES
00:01:48.286 Library numa found: YES
00:01:48.286 Has header "numaif.h" : YES
00:01:48.286 Library fdt found: NO
00:01:48.286 Library execinfo found: NO
00:01:48.286 Has header "execinfo.h" : YES
00:01:48.286 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:48.286 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:48.286 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:48.286 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:48.286 Run-time dependency openssl found: YES 3.0.9
00:01:48.286 Run-time dependency libpcap found: YES 1.10.4
00:01:48.286 Has header "pcap.h" with dependency libpcap: YES
00:01:48.286 Compiler for C supports arguments -Wcast-qual: YES
00:01:48.286 Compiler for C supports arguments -Wdeprecated: YES
00:01:48.286 Compiler for C supports arguments -Wformat: YES
00:01:48.286 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:48.286 Compiler for C supports arguments -Wformat-security: NO
00:01:48.286 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.286 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:48.286 Compiler for C supports arguments -Wnested-externs: YES
00:01:48.286 Compiler for C supports arguments -Wold-style-definition: YES
00:01:48.286 Compiler for C supports arguments -Wpointer-arith: YES
00:01:48.286 Compiler for C supports arguments -Wsign-compare: YES
00:01:48.286 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:48.286 Compiler for C supports arguments -Wundef: YES
00:01:48.286 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.286 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:48.287 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:48.287 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.287 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:48.287 Program objdump found: YES (/usr/bin/objdump)
00:01:48.287 Compiler for C supports arguments -mavx512f: YES
00:01:48.287 Checking if "AVX512 checking" compiles: YES
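Each "Compiler for C supports arguments -X: YES" line above is Meson compiling a throwaway program with the candidate flag and recording whether the compiler accepts it. The same probe can be reproduced by hand with any gcc/clang on PATH:

# Hand-rolled version of Meson's "compiler supports argument" probe.
supports_flag() {
    echo 'int main(void){return 0;}' | cc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}
supports_flag -mavx512f && echo "-mavx512f: YES" || echo "-mavx512f: NO"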
00:01:48.287 Fetching value of define "__SSE4_2__" : 1
00:01:48.287 Fetching value of define "__AES__" : 1
00:01:48.287 Fetching value of define "__AVX__" : 1
00:01:48.287 Fetching value of define "__AVX2__" : 1
00:01:48.287 Fetching value of define "__AVX512BW__" : 1
00:01:48.287 Fetching value of define "__AVX512CD__" : 1
00:01:48.287 Fetching value of define "__AVX512DQ__" : 1
00:01:48.287 Fetching value of define "__AVX512F__" : 1
00:01:48.287 Fetching value of define "__AVX512VL__" : 1
00:01:48.287 Fetching value of define "__PCLMUL__" : 1
00:01:48.287 Fetching value of define "__RDRND__" : 1
00:01:48.287 Fetching value of define "__RDSEED__" : 1
00:01:48.287 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:48.287 Fetching value of define "__znver1__" : (undefined)
00:01:48.287 Fetching value of define "__znver2__" : (undefined)
00:01:48.287 Fetching value of define "__znver3__" : (undefined)
00:01:48.287 Fetching value of define "__znver4__" : (undefined)
00:01:48.287 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:48.287 Message: lib/log: Defining dependency "log"
00:01:48.287 Message: lib/kvargs: Defining dependency "kvargs"
00:01:48.287 Message: lib/telemetry: Defining dependency "telemetry"
00:01:48.287 Checking for function "getentropy" : NO
00:01:48.287 Message: lib/eal: Defining dependency "eal"
00:01:48.287 Message: lib/ring: Defining dependency "ring"
00:01:48.287 Message: lib/rcu: Defining dependency "rcu"
00:01:48.287 Message: lib/mempool: Defining dependency "mempool"
00:01:48.287 Message: lib/mbuf: Defining dependency "mbuf"
00:01:48.287 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:48.287 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:48.287 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:48.287 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:48.287 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:48.287 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:48.287 Compiler for C supports arguments -mpclmul: YES
00:01:48.287 Compiler for C supports arguments -maes: YES
00:01:48.287 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:48.287 Compiler for C supports arguments -mavx512bw: YES
00:01:48.287 Compiler for C supports arguments -mavx512dq: YES
00:01:48.287 Compiler for C supports arguments -mavx512vl: YES
00:01:48.287 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:48.287 Compiler for C supports arguments -mavx2: YES
00:01:48.287 Compiler for C supports arguments -mavx: YES
00:01:48.287 Message: lib/net: Defining dependency "net"
00:01:48.287 Message: lib/meter: Defining dependency "meter"
00:01:48.287 Message: lib/ethdev: Defining dependency "ethdev"
00:01:48.287 Message: lib/pci: Defining dependency "pci"
00:01:48.287 Message: lib/cmdline: Defining dependency "cmdline"
00:01:48.287 Message: lib/hash: Defining dependency "hash"
00:01:48.287 Message: lib/timer: Defining dependency "timer"
00:01:48.287 Message: lib/compressdev: Defining dependency "compressdev"
00:01:48.287 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:48.287 Message: lib/dmadev: Defining dependency "dmadev"
00:01:48.287 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:48.287 Message: lib/power: Defining dependency "power"
00:01:48.287 Message: lib/reorder: Defining dependency "reorder"
00:01:48.287 Message: lib/security: Defining dependency "security"
00:01:48.287 Has header "linux/userfaultfd.h" : YES
00:01:48.287 Has header "linux/vduse.h" : YES
00:01:48.287 Message: lib/vhost: Defining dependency "vhost"
00:01:48.287 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:48.287 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:48.287 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:48.287 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:48.287 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:48.287 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:48.287 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:48.287 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:48.287 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:48.287 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:48.287 Program doxygen found: YES (/usr/bin/doxygen)
00:01:48.287 Configuring doxy-api-html.conf using configuration
00:01:48.287 Configuring doxy-api-man.conf using configuration
00:01:48.287 Program mandb found: YES (/usr/bin/mandb)
00:01:48.287 Program sphinx-build found: NO
00:01:48.287 Configuring rte_build_config.h using configuration
00:01:48.287 Message:
00:01:48.287 =================
00:01:48.287 Applications Enabled
00:01:48.287 =================
00:01:48.287
00:01:48.287 apps:
00:01:48.287
00:01:48.287
00:01:48.287 Message:
00:01:48.287 =================
00:01:48.287 Libraries Enabled
00:01:48.287 =================
00:01:48.287
00:01:48.287 libs:
00:01:48.287 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:48.287 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:48.287 cryptodev, dmadev, power, reorder, security, vhost,
00:01:48.287
00:01:48.287 Message:
00:01:48.287 ===============
00:01:48.287 Drivers Enabled
00:01:48.287 ===============
00:01:48.287
00:01:48.287 common:
00:01:48.287
00:01:48.287 bus:
00:01:48.287 pci, vdev,
00:01:48.287 mempool:
00:01:48.287 ring,
00:01:48.287 dma:
00:01:48.287
00:01:48.287 net:
00:01:48.287
00:01:48.287 crypto:
00:01:48.287
00:01:48.287 compress:
00:01:48.287
00:01:48.287 vdpa:
00:01:48.287
00:01:48.287
00:01:48.287 Message:
00:01:48.287 =================
00:01:48.287 Content Skipped
00:01:48.287 =================
00:01:48.287
00:01:48.287 apps:
00:01:48.287 dumpcap: explicitly disabled via build config
00:01:48.287 graph: explicitly disabled via build config
00:01:48.287 pdump: explicitly disabled via build config
00:01:48.287 proc-info: explicitly disabled via build config
00:01:48.287 test-acl: explicitly disabled via build config
00:01:48.287 test-bbdev: explicitly disabled via build config
00:01:48.287 test-cmdline: explicitly disabled via build config
00:01:48.287 test-compress-perf: explicitly disabled via build config
00:01:48.287 test-crypto-perf: explicitly disabled via build config
00:01:48.287 test-dma-perf: explicitly disabled via build config
00:01:48.287 test-eventdev: explicitly disabled via build config
00:01:48.287 test-fib: explicitly disabled via build config
00:01:48.287 test-flow-perf: explicitly disabled via build config
00:01:48.287 test-gpudev: explicitly disabled via build config
00:01:48.287 test-mldev: explicitly disabled via build config
00:01:48.287 test-pipeline: explicitly disabled via build config
00:01:48.287 test-pmd: explicitly disabled via build config
00:01:48.287 test-regex: explicitly disabled via build config
00:01:48.287 test-sad: explicitly disabled via build config
00:01:48.287 test-security-perf: explicitly disabled via build config
00:01:48.287
00:01:48.287 libs:
00:01:48.287 metrics: explicitly disabled via build config
00:01:48.287 acl: explicitly disabled via build config
00:01:48.287 bbdev: explicitly disabled via build config
00:01:48.287 bitratestats: explicitly disabled via build config
00:01:48.287 bpf: explicitly disabled via build config
00:01:48.287 cfgfile: explicitly disabled via build config
00:01:48.287 distributor: explicitly disabled via build config
00:01:48.287 efd: explicitly disabled via build config
00:01:48.287 eventdev: explicitly disabled via build config
00:01:48.287 dispatcher: explicitly disabled via build config
00:01:48.287 gpudev: explicitly disabled via build config
00:01:48.287 gro: explicitly disabled via build config
00:01:48.287 gso: explicitly disabled via build config
00:01:48.287 ip_frag: explicitly disabled via build config
00:01:48.287 jobstats: explicitly disabled via build config
00:01:48.287 latencystats: explicitly disabled via build config
00:01:48.287 lpm: explicitly disabled via build config
00:01:48.287 member: explicitly disabled via build config
00:01:48.287 pcapng: explicitly disabled via build config
00:01:48.287 rawdev: explicitly disabled via build config
00:01:48.287 regexdev: explicitly disabled via build config
00:01:48.287 mldev: explicitly disabled via build config
00:01:48.287 rib: explicitly disabled via build config
00:01:48.287 sched: explicitly disabled via build config
00:01:48.287 stack: explicitly disabled via build config
00:01:48.287 ipsec: explicitly disabled via build config
00:01:48.287 pdcp: explicitly disabled via build config
00:01:48.287 fib: explicitly disabled via build config
00:01:48.287 port: explicitly disabled via build config
00:01:48.287 pdump: explicitly disabled via build config
00:01:48.287 table: explicitly disabled via build config
00:01:48.287 pipeline: explicitly disabled via build config
00:01:48.287 graph: explicitly disabled via build config
00:01:48.287 node: explicitly disabled via build config
00:01:48.287
00:01:48.287 drivers:
00:01:48.287 common/cpt: not in enabled drivers build config
00:01:48.287 common/dpaax: not in enabled drivers build config
00:01:48.287 common/iavf: not in enabled drivers build config
00:01:48.287 common/idpf: not in enabled drivers build config
00:01:48.287 common/mvep: not in enabled drivers build config
00:01:48.287 common/octeontx: not in enabled drivers build config
00:01:48.287 bus/auxiliary: not in enabled drivers build config
00:01:48.287 bus/cdx: not in enabled drivers build config
00:01:48.287 bus/dpaa: not in enabled drivers build config
00:01:48.287 bus/fslmc: not in enabled drivers build config
00:01:48.287 bus/ifpga: not in enabled drivers build config
00:01:48.287 bus/platform: not in enabled drivers build config
00:01:48.287 bus/vmbus: not in enabled drivers build config
00:01:48.287 common/cnxk: not in enabled drivers build config
00:01:48.287 common/mlx5: not in enabled drivers build config
00:01:48.287 common/nfp: not in enabled drivers build config
00:01:48.287 common/qat: not in enabled drivers build config
00:01:48.287 common/sfc_efx: not in enabled drivers build config
00:01:48.288 mempool/bucket: not in enabled drivers build config
00:01:48.288 mempool/cnxk: not in enabled drivers build config
00:01:48.288 mempool/dpaa: not in enabled drivers build config
00:01:48.288 mempool/dpaa2: not in enabled drivers build config
00:01:48.288 mempool/octeontx: not in enabled drivers build config
00:01:48.288 mempool/stack: not in enabled drivers build config
00:01:48.288 dma/cnxk: not in enabled drivers build config
00:01:48.288 dma/dpaa: not in enabled drivers build config
00:01:48.288 dma/dpaa2: not in enabled drivers build config
00:01:48.288 dma/hisilicon: not in enabled drivers build config
00:01:48.288 dma/idxd: not in enabled drivers build config
00:01:48.288 dma/ioat: not in enabled drivers build config
00:01:48.288 dma/skeleton: not in enabled drivers build config
00:01:48.288 net/af_packet: not in enabled drivers build config
00:01:48.288 net/af_xdp: not in enabled drivers build config
00:01:48.288 net/ark: not in enabled drivers build config
00:01:48.288 net/atlantic: not in enabled drivers build config
00:01:48.288 net/avp: not in enabled drivers build config
00:01:48.288 net/axgbe: not in enabled drivers build config
00:01:48.288 net/bnx2x: not in enabled drivers build config
00:01:48.288 net/bnxt: not in enabled drivers build config
00:01:48.288 net/bonding: not in enabled drivers build config
00:01:48.288 net/cnxk: not in enabled drivers build config
00:01:48.288 net/cpfl: not in enabled drivers build config
00:01:48.288 net/cxgbe: not in enabled drivers build config
00:01:48.288 net/dpaa: not in enabled drivers build config
00:01:48.288 net/dpaa2: not in enabled drivers build config
00:01:48.288 net/e1000: not in enabled drivers build config
00:01:48.288 net/ena: not in enabled drivers build config
00:01:48.288 net/enetc: not in enabled drivers build config
00:01:48.288 net/enetfec: not in enabled drivers build config
00:01:48.288 net/enic: not in enabled drivers build config
00:01:48.288 net/failsafe: not in enabled drivers build config
00:01:48.288 net/fm10k: not in enabled drivers build config
00:01:48.288 net/gve: not in enabled drivers build config
00:01:48.288 net/hinic: not in enabled drivers build config
00:01:48.288 net/hns3: not in enabled drivers build config
00:01:48.288 net/i40e: not in enabled drivers build config
00:01:48.288 net/iavf: not in enabled drivers build config
00:01:48.288 net/ice: not in enabled drivers build config
00:01:48.288 net/idpf: not in enabled drivers build config
00:01:48.288 net/igc: not in enabled drivers build config
00:01:48.288 net/ionic: not in enabled drivers build config
00:01:48.288 net/ipn3ke: not in enabled drivers build config
00:01:48.288 net/ixgbe: not in enabled drivers build config
00:01:48.288 net/mana: not in enabled drivers build config
00:01:48.288 net/memif: not in enabled drivers build config
00:01:48.288 net/mlx4: not in enabled drivers build config
00:01:48.288 net/mlx5: not in enabled drivers build config
00:01:48.288 net/mvneta: not in enabled drivers build config
00:01:48.288 net/mvpp2: not in enabled drivers build config
00:01:48.288 net/netvsc: not in enabled drivers build config
00:01:48.288 net/nfb: not in enabled drivers build config
00:01:48.288 net/nfp: not in enabled drivers build config
00:01:48.288 net/ngbe: not in enabled drivers build config
00:01:48.288 net/null: not in enabled drivers build config
00:01:48.288 net/octeontx: not in enabled drivers build config
00:01:48.288 net/octeon_ep: not in enabled drivers build config
00:01:48.288 net/pcap: not in enabled drivers build config
00:01:48.288 net/pfe: not in enabled drivers build config
00:01:48.288 net/qede: not in enabled drivers build config
00:01:48.288 net/ring: not in enabled drivers build config
00:01:48.288 net/sfc: not in enabled drivers build config
00:01:48.288 net/softnic: not in enabled drivers build config
00:01:48.288 net/tap: not in enabled drivers build config
00:01:48.288 net/thunderx: not in enabled drivers build config
00:01:48.288 net/txgbe: not in enabled drivers build config
00:01:48.288 net/vdev_netvsc: not in enabled drivers build config
00:01:48.288 net/vhost: not in enabled drivers build config
00:01:48.288 net/virtio: not in enabled drivers build config
00:01:48.288 net/vmxnet3: not in enabled drivers build config
00:01:48.288 raw/*: missing internal dependency, "rawdev"
00:01:48.288 crypto/armv8: not in enabled drivers build config
00:01:48.288 crypto/bcmfs: not in enabled drivers build config
00:01:48.288 crypto/caam_jr: not in enabled drivers build config
00:01:48.288 crypto/ccp: not in enabled drivers build config
00:01:48.288 crypto/cnxk: not in enabled drivers build config
00:01:48.288 crypto/dpaa_sec: not in enabled drivers build config
00:01:48.288 crypto/dpaa2_sec: not in enabled drivers build config
00:01:48.288 crypto/ipsec_mb: not in enabled drivers build config
00:01:48.288 crypto/mlx5: not in enabled drivers build config
00:01:48.288 crypto/mvsam: not in enabled drivers build config
00:01:48.288 crypto/nitrox: not in enabled drivers build config
00:01:48.288 crypto/null: not in enabled drivers build config
00:01:48.288 crypto/octeontx: not in enabled drivers build config
00:01:48.288 crypto/openssl: not in enabled drivers build config
00:01:48.288 crypto/scheduler: not in enabled drivers build config
00:01:48.288 crypto/uadk: not in enabled drivers build config
00:01:48.288 crypto/virtio: not in enabled drivers build config
00:01:48.288 compress/isal: not in enabled drivers build config
00:01:48.288 compress/mlx5: not in enabled drivers build config
00:01:48.288 compress/octeontx: not in enabled drivers build config
00:01:48.288 compress/zlib: not in enabled drivers build config
00:01:48.288 regex/*: missing internal dependency, "regexdev"
00:01:48.288 ml/*: missing internal dependency, "mldev"
00:01:48.288 vdpa/ifc: not in enabled drivers build config
00:01:48.288 vdpa/mlx5: not in enabled drivers build config
00:01:48.288 vdpa/nfp: not in enabled drivers build config
00:01:48.288 vdpa/sfc: not in enabled drivers build config
00:01:48.288 event/*: missing internal dependency, "eventdev"
00:01:48.288 baseband/*: missing internal dependency, "bbdev"
00:01:48.288 gpu/*: missing internal dependency, "gpudev"
00:01:48.288
00:01:48.288
00:01:48.546 Build targets in project: 85
00:01:48.546
00:01:48.546 DPDK 23.11.0
00:01:48.546
00:01:48.546 User defined options
00:01:48.546 buildtype : debug
00:01:48.546 default_library : shared
00:01:48.546 libdir : lib
00:01:48.546 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:48.546 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:48.546 c_link_args :
00:01:48.546 cpu_instruction_set: native
00:01:48.546 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:48.546 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:48.546 enable_docs : false
00:01:48.546 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:48.546 enable_kmods : false
00:01:48.546 tests : false
00:01:48.546
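The long "explicitly disabled via build config" and "not in enabled drivers build config" lists above are the direct effect of the disable_apps/disable_libs/enable_drivers values echoed under "User defined options": SPDK's configure trims DPDK down to the 85 targets it actually needs. A minimal sketch of configuring such a subset by hand, with the option lists abbreviated from the values in the log:

# Configure a trimmed DPDK build the way the options above do (lists abbreviated).
meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Ddisable_apps=dumpcap,graph,pdump \
    -Ddisable_libs=acl,bbdev,bpf \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Dtests=false
ninja -C build-tmp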
00:01:48.547 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:49.121 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:01:49.121 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:49.121 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:49.121 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:49.121 [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:49.121 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:49.121 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:49.121 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:49.121 [8/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:49.121 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:49.121 [10/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:49.121 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:49.121 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:49.121 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:49.121 [14/265] Linking static target lib/librte_kvargs.a
00:01:49.121 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:49.121 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:49.121 [17/265] Linking static target lib/librte_log.a
00:01:49.121 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:49.121 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:49.121 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:49.121 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:49.121 [22/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:49.121 [23/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:49.121 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:49.378 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:49.639 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:49.639 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:49.639 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:49.639 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:49.639 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:49.639 [31/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:49.639 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:49.639 [33/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:49.639 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:49.639 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:49.639 [36/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:49.639 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:49.639 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:49.639 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:49.639 [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:49.639 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:49.639 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:49.639 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:49.639 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:49.639 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:49.639 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:49.639 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:49.639 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:49.639 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:49.639 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:49.639 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:49.639 [52/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:49.639 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:49.639 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:49.639 [55/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:49.639 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:49.639 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:49.639 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:49.639 [59/265] Linking static target lib/librte_telemetry.a
00:01:49.639 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:49.639 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:49.639 [62/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:49.639 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:49.639 [64/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:49.639 [65/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:49.639 [66/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:49.639 [67/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:49.639 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:49.639 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:49.639 [70/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:49.639 [71/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:49.639 [72/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:49.639 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:49.639 [74/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.639 [75/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:49.639 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:49.639 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:49.639 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:49.639 [79/265] Linking static target lib/librte_ring.a
00:01:49.640 [80/265] Linking static target lib/librte_pci.a
00:01:49.640 [81/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:49.640 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:49.640 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:49.640 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:49.640 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:49.640 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:49.640 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:49.640 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:49.640 [89/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:49.640 [90/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:49.640 [91/265] Linking static target lib/librte_meter.a
00:01:49.640 [92/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:49.899 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:49.899 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:49.899 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:49.899 [96/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:49.899 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:49.899 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:49.899 [99/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:49.899 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:49.899 [101/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:49.899 [102/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:49.899 [103/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:49.899 [104/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:49.899 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:49.899 [106/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:49.899 [107/265] Linking static target lib/librte_mempool.a
00:01:49.899 [108/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:49.899 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:49.899 [110/265] Linking static target lib/librte_rcu.a
00:01:49.899 [111/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:49.899 [112/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:49.899 [113/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:49.899 [114/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:49.899 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:49.899 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:49.899 [117/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:49.899 [118/265] Linking static target lib/librte_net.a
00:01:49.899 [119/265] Linking static target lib/librte_eal.a
00:01:49.899 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:50.158 [121/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [122/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [123/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [124/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:50.158 [125/265] Linking static target lib/librte_mbuf.a
00:01:50.158 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:50.158 [127/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [128/265] Linking target lib/librte_log.so.24.0
00:01:50.158 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:50.158 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:50.158 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:50.158 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:50.158 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:50.158 [134/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:50.158 [135/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:50.158 [136/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:50.158 [137/265] Linking static target lib/librte_cmdline.a
00:01:50.158 [138/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:50.158 [140/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:50.158 [141/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:50.158 [143/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.158 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:50.158 [145/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:50.158 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:50.158 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:50.158 [148/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:50.158 [149/265] Linking static target lib/librte_timer.a
00:01:50.158 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:50.158 [151/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:50.158 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:50.158 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:50.417 [154/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:50.417 [155/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:50.417 [156/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:50.417 [157/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:50.417 [158/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:50.417 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:50.417 [160/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:50.417 [162/265] Linking static target lib/librte_dmadev.a 00:01:50.417 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.417 [164/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.417 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.417 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.417 [167/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.417 [168/265] Linking static target lib/librte_compressdev.a 00:01:50.417 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.417 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.417 [171/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.417 [172/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.417 [173/265] Linking static target lib/librte_power.a 00:01:50.417 [174/265] Linking target lib/librte_kvargs.so.24.0 00:01:50.417 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.417 [176/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.417 [177/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.417 [178/265] Linking target lib/librte_telemetry.so.24.0 00:01:50.417 [179/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.417 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.417 [181/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.417 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.417 [183/265] Linking static target lib/librte_security.a 00:01:50.417 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.417 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.417 [186/265] Linking static target lib/librte_reorder.a 00:01:50.417 [187/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.417 [188/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.417 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.417 [190/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:50.417 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.417 [192/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:50.417 [193/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.417 [194/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.417 [195/265] Linking static target lib/librte_hash.a 00:01:50.675 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.675 [197/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.675 [198/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.675 [199/265] Linking static target drivers/librte_bus_vdev.a 00:01:50.675 [200/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.675 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.675 [202/265] 
Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.675 [203/265] Linking static target drivers/librte_bus_pci.a 00:01:50.675 [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.675 [205/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:50.675 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.675 [207/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.675 [208/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.675 [209/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.675 [210/265] Linking static target lib/librte_cryptodev.a 00:01:50.675 [211/265] Linking static target drivers/librte_mempool_ring.a 00:01:50.675 [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.933 [213/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.933 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.933 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.933 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.933 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.191 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.191 [219/265] Linking static target lib/librte_ethdev.a 00:01:51.191 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.191 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.449 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.450 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.450 [224/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.385 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:52.385 [226/265] Linking static target lib/librte_vhost.a 00:01:52.643 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.627 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.890 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.424 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.424 [231/265] Linking target lib/librte_eal.so.24.0 00:02:02.424 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:02.681 [233/265] Linking target lib/librte_meter.so.24.0 00:02:02.681 [234/265] Linking target lib/librte_ring.so.24.0 00:02:02.681 [235/265] Linking target lib/librte_timer.so.24.0 00:02:02.681 [236/265] Linking target lib/librte_pci.so.24.0 00:02:02.681 [237/265] Linking target lib/librte_dmadev.so.24.0 00:02:02.681 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:02.681 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:02.681 
[240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:02.681 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.681 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:02.681 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:02.681 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:02.681 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:02.681 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:02.939 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:02.939 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:02.939 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:02.939 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:03.196 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:03.196 [252/265] Linking target lib/librte_net.so.24.0 00:02:03.196 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:03.196 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:03.196 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:03.196 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:03.196 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:03.196 [258/265] Linking target lib/librte_cmdline.so.24.0 00:02:03.196 [259/265] Linking target lib/librte_hash.so.24.0 00:02:03.454 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:03.454 [261/265] Linking target lib/librte_security.so.24.0 00:02:03.454 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:03.454 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.454 [264/265] Linking target lib/librte_power.so.24.0 00:02:03.454 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:03.454 INFO: autodetecting backend as ninja 00:02:03.454 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:04.827 CC lib/ut_mock/mock.o 00:02:04.827 CC lib/log/log.o 00:02:04.827 CC lib/log/log_deprecated.o 00:02:04.827 CC lib/log/log_flags.o 00:02:04.827 CC lib/ut/ut.o 00:02:04.827 LIB libspdk_ut_mock.a 00:02:04.827 SO libspdk_ut_mock.so.6.0 00:02:04.827 LIB libspdk_ut.a 00:02:04.827 LIB libspdk_log.a 00:02:04.827 SO libspdk_ut.so.2.0 00:02:04.827 SYMLINK libspdk_ut_mock.so 00:02:04.827 SO libspdk_log.so.7.0 00:02:04.827 SYMLINK libspdk_ut.so 00:02:04.827 SYMLINK libspdk_log.so 00:02:05.085 CXX lib/trace_parser/trace.o 00:02:05.342 CC lib/util/base64.o 00:02:05.342 CC lib/util/bit_array.o 00:02:05.342 CC lib/util/cpuset.o 00:02:05.342 CC lib/util/crc16.o 00:02:05.342 CC lib/util/crc64.o 00:02:05.342 CC lib/util/crc32.o 00:02:05.342 CC lib/util/crc32c.o 00:02:05.342 CC lib/util/dif.o 00:02:05.343 CC lib/util/crc32_ieee.o 00:02:05.343 CC lib/util/fd.o 00:02:05.343 CC lib/ioat/ioat.o 00:02:05.343 CC lib/util/hexlify.o 00:02:05.343 CC lib/util/file.o 00:02:05.343 CC lib/util/iov.o 00:02:05.343 CC lib/util/math.o 00:02:05.343 CC lib/util/pipe.o 00:02:05.343 CC lib/util/strerror_tls.o 00:02:05.343 CC lib/util/string.o 00:02:05.343 CC lib/util/uuid.o 00:02:05.343 CC lib/util/zipf.o 00:02:05.343 CC lib/util/fd_group.o 00:02:05.343 CC lib/util/xor.o 
00:02:05.343 CC lib/dma/dma.o 00:02:05.343 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.343 CC lib/vfio_user/host/vfio_user.o 00:02:05.343 LIB libspdk_dma.a 00:02:05.600 SO libspdk_dma.so.4.0 00:02:05.600 LIB libspdk_ioat.a 00:02:05.600 SO libspdk_ioat.so.7.0 00:02:05.600 SYMLINK libspdk_dma.so 00:02:05.600 LIB libspdk_vfio_user.a 00:02:05.600 SYMLINK libspdk_ioat.so 00:02:05.600 SO libspdk_vfio_user.so.5.0 00:02:05.600 LIB libspdk_util.a 00:02:05.600 SYMLINK libspdk_vfio_user.so 00:02:05.858 SO libspdk_util.so.9.0 00:02:05.858 SYMLINK libspdk_util.so 00:02:05.858 LIB libspdk_trace_parser.a 00:02:05.858 SO libspdk_trace_parser.so.5.0 00:02:06.117 SYMLINK libspdk_trace_parser.so 00:02:06.117 CC lib/idxd/idxd.o 00:02:06.117 CC lib/idxd/idxd_user.o 00:02:06.117 CC lib/conf/conf.o 00:02:06.117 CC lib/vmd/vmd.o 00:02:06.117 CC lib/vmd/led.o 00:02:06.117 CC lib/env_dpdk/env.o 00:02:06.117 CC lib/json/json_parse.o 00:02:06.117 CC lib/env_dpdk/memory.o 00:02:06.117 CC lib/env_dpdk/pci.o 00:02:06.117 CC lib/json/json_util.o 00:02:06.117 CC lib/env_dpdk/init.o 00:02:06.117 CC lib/json/json_write.o 00:02:06.117 CC lib/env_dpdk/threads.o 00:02:06.117 CC lib/env_dpdk/pci_ioat.o 00:02:06.117 CC lib/env_dpdk/pci_virtio.o 00:02:06.117 CC lib/env_dpdk/pci_vmd.o 00:02:06.117 CC lib/env_dpdk/pci_idxd.o 00:02:06.117 CC lib/env_dpdk/pci_event.o 00:02:06.117 CC lib/rdma/common.o 00:02:06.117 CC lib/env_dpdk/sigbus_handler.o 00:02:06.117 CC lib/rdma/rdma_verbs.o 00:02:06.117 CC lib/env_dpdk/pci_dpdk.o 00:02:06.117 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.117 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.375 LIB libspdk_conf.a 00:02:06.375 SO libspdk_conf.so.6.0 00:02:06.375 SYMLINK libspdk_conf.so 00:02:06.633 LIB libspdk_json.a 00:02:06.633 LIB libspdk_rdma.a 00:02:06.633 SO libspdk_json.so.6.0 00:02:06.633 SO libspdk_rdma.so.6.0 00:02:06.633 SYMLINK libspdk_json.so 00:02:06.633 SYMLINK libspdk_rdma.so 00:02:06.633 LIB libspdk_idxd.a 00:02:06.633 SO libspdk_idxd.so.12.0 00:02:06.633 LIB libspdk_vmd.a 00:02:06.633 SO libspdk_vmd.so.6.0 00:02:06.633 SYMLINK libspdk_idxd.so 00:02:06.890 SYMLINK libspdk_vmd.so 00:02:06.890 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.890 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.890 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.890 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.148 LIB libspdk_jsonrpc.a 00:02:07.148 LIB libspdk_env_dpdk.a 00:02:07.148 SO libspdk_jsonrpc.so.6.0 00:02:07.406 SO libspdk_env_dpdk.so.14.0 00:02:07.406 SYMLINK libspdk_jsonrpc.so 00:02:07.406 SYMLINK libspdk_env_dpdk.so 00:02:07.664 CC lib/rpc/rpc.o 00:02:07.922 LIB libspdk_rpc.a 00:02:07.922 SO libspdk_rpc.so.6.0 00:02:07.922 SYMLINK libspdk_rpc.so 00:02:08.181 CC lib/trace/trace.o 00:02:08.181 CC lib/trace/trace_flags.o 00:02:08.181 CC lib/trace/trace_rpc.o 00:02:08.181 CC lib/notify/notify_rpc.o 00:02:08.181 CC lib/notify/notify.o 00:02:08.181 CC lib/keyring/keyring.o 00:02:08.181 CC lib/keyring/keyring_rpc.o 00:02:08.438 LIB libspdk_notify.a 00:02:08.438 LIB libspdk_trace.a 00:02:08.438 SO libspdk_trace.so.10.0 00:02:08.438 LIB libspdk_keyring.a 00:02:08.438 SO libspdk_notify.so.6.0 00:02:08.438 SO libspdk_keyring.so.1.0 00:02:08.438 SYMLINK libspdk_trace.so 00:02:08.438 SYMLINK libspdk_notify.so 00:02:08.696 SYMLINK libspdk_keyring.so 00:02:08.956 CC lib/sock/sock.o 00:02:08.956 CC lib/sock/sock_rpc.o 00:02:08.956 CC lib/thread/thread.o 00:02:08.956 CC lib/thread/iobuf.o 00:02:09.215 LIB libspdk_sock.a 00:02:09.215 SO libspdk_sock.so.9.0 00:02:09.215 SYMLINK libspdk_sock.so 00:02:09.473 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:02:09.473 CC lib/nvme/nvme_ctrlr.o 00:02:09.473 CC lib/nvme/nvme_fabric.o 00:02:09.473 CC lib/nvme/nvme_ns_cmd.o 00:02:09.473 CC lib/nvme/nvme_ns.o 00:02:09.473 CC lib/nvme/nvme_pcie_common.o 00:02:09.473 CC lib/nvme/nvme_pcie.o 00:02:09.473 CC lib/nvme/nvme_qpair.o 00:02:09.473 CC lib/nvme/nvme_quirks.o 00:02:09.474 CC lib/nvme/nvme.o 00:02:09.474 CC lib/nvme/nvme_discovery.o 00:02:09.474 CC lib/nvme/nvme_transport.o 00:02:09.474 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.474 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.474 CC lib/nvme/nvme_tcp.o 00:02:09.474 CC lib/nvme/nvme_zns.o 00:02:09.474 CC lib/nvme/nvme_opal.o 00:02:09.474 CC lib/nvme/nvme_io_msg.o 00:02:09.474 CC lib/nvme/nvme_poll_group.o 00:02:09.474 CC lib/nvme/nvme_stubs.o 00:02:09.474 CC lib/nvme/nvme_auth.o 00:02:09.474 CC lib/nvme/nvme_cuse.o 00:02:09.474 CC lib/nvme/nvme_rdma.o 00:02:10.041 LIB libspdk_thread.a 00:02:10.041 SO libspdk_thread.so.10.0 00:02:10.041 SYMLINK libspdk_thread.so 00:02:10.300 CC lib/accel/accel.o 00:02:10.300 CC lib/accel/accel_sw.o 00:02:10.300 CC lib/accel/accel_rpc.o 00:02:10.300 CC lib/virtio/virtio.o 00:02:10.300 CC lib/virtio/virtio_vhost_user.o 00:02:10.300 CC lib/virtio/virtio_vfio_user.o 00:02:10.300 CC lib/virtio/virtio_pci.o 00:02:10.300 CC lib/blob/blobstore.o 00:02:10.300 CC lib/blob/request.o 00:02:10.300 CC lib/blob/zeroes.o 00:02:10.300 CC lib/blob/blob_bs_dev.o 00:02:10.300 CC lib/init/json_config.o 00:02:10.300 CC lib/init/rpc.o 00:02:10.300 CC lib/init/subsystem.o 00:02:10.300 CC lib/init/subsystem_rpc.o 00:02:10.579 LIB libspdk_init.a 00:02:10.579 LIB libspdk_virtio.a 00:02:10.579 SO libspdk_init.so.5.0 00:02:10.579 SO libspdk_virtio.so.7.0 00:02:10.883 SYMLINK libspdk_init.so 00:02:10.883 SYMLINK libspdk_virtio.so 00:02:11.142 CC lib/event/app.o 00:02:11.142 CC lib/event/reactor.o 00:02:11.142 CC lib/event/scheduler_static.o 00:02:11.142 CC lib/event/log_rpc.o 00:02:11.142 CC lib/event/app_rpc.o 00:02:11.142 LIB libspdk_accel.a 00:02:11.142 SO libspdk_accel.so.15.0 00:02:11.142 SYMLINK libspdk_accel.so 00:02:11.142 LIB libspdk_nvme.a 00:02:11.401 SO libspdk_nvme.so.13.0 00:02:11.401 LIB libspdk_event.a 00:02:11.401 SO libspdk_event.so.13.0 00:02:11.660 SYMLINK libspdk_event.so 00:02:11.660 CC lib/bdev/bdev.o 00:02:11.660 CC lib/bdev/bdev_rpc.o 00:02:11.660 CC lib/bdev/part.o 00:02:11.660 CC lib/bdev/scsi_nvme.o 00:02:11.660 CC lib/bdev/bdev_zone.o 00:02:11.660 SYMLINK libspdk_nvme.so 00:02:12.597 LIB libspdk_blob.a 00:02:12.597 SO libspdk_blob.so.11.0 00:02:12.597 SYMLINK libspdk_blob.so 00:02:12.855 CC lib/lvol/lvol.o 00:02:12.855 CC lib/blobfs/blobfs.o 00:02:12.855 CC lib/blobfs/tree.o 00:02:13.422 LIB libspdk_bdev.a 00:02:13.422 SO libspdk_bdev.so.15.0 00:02:13.422 SYMLINK libspdk_bdev.so 00:02:13.422 LIB libspdk_blobfs.a 00:02:13.681 LIB libspdk_lvol.a 00:02:13.681 SO libspdk_blobfs.so.10.0 00:02:13.681 SO libspdk_lvol.so.10.0 00:02:13.681 SYMLINK libspdk_blobfs.so 00:02:13.681 SYMLINK libspdk_lvol.so 00:02:13.943 CC lib/nbd/nbd.o 00:02:13.943 CC lib/nbd/nbd_rpc.o 00:02:13.943 CC lib/nvmf/ctrlr.o 00:02:13.943 CC lib/nvmf/ctrlr_bdev.o 00:02:13.943 CC lib/nvmf/ctrlr_discovery.o 00:02:13.943 CC lib/nvmf/subsystem.o 00:02:13.943 CC lib/nvmf/nvmf.o 00:02:13.943 CC lib/nvmf/nvmf_rpc.o 00:02:13.943 CC lib/nvmf/transport.o 00:02:13.943 CC lib/nvmf/tcp.o 00:02:13.943 CC lib/nvmf/stubs.o 00:02:13.943 CC lib/nvmf/mdns_server.o 00:02:13.943 CC lib/nvmf/rdma.o 00:02:13.943 CC lib/nvmf/auth.o 00:02:13.943 CC lib/ftl/ftl_core.o 00:02:13.943 CC 
lib/ftl/ftl_layout.o 00:02:13.943 CC lib/ftl/ftl_init.o 00:02:13.943 CC lib/ftl/ftl_debug.o 00:02:13.943 CC lib/ftl/ftl_io.o 00:02:13.943 CC lib/ftl/ftl_l2p_flat.o 00:02:13.943 CC lib/ftl/ftl_sb.o 00:02:13.943 CC lib/ftl/ftl_l2p.o 00:02:13.943 CC lib/ftl/ftl_nv_cache.o 00:02:13.943 CC lib/scsi/dev.o 00:02:13.943 CC lib/scsi/port.o 00:02:13.943 CC lib/scsi/lun.o 00:02:13.943 CC lib/ftl/ftl_band.o 00:02:13.943 CC lib/ftl/ftl_band_ops.o 00:02:13.943 CC lib/scsi/scsi.o 00:02:13.943 CC lib/scsi/scsi_pr.o 00:02:13.943 CC lib/scsi/scsi_bdev.o 00:02:13.943 CC lib/ftl/ftl_writer.o 00:02:13.943 CC lib/scsi/scsi_rpc.o 00:02:13.943 CC lib/ublk/ublk.o 00:02:13.943 CC lib/ublk/ublk_rpc.o 00:02:13.943 CC lib/scsi/task.o 00:02:13.943 CC lib/ftl/ftl_reloc.o 00:02:13.943 CC lib/ftl/ftl_rq.o 00:02:13.943 CC lib/ftl/ftl_l2p_cache.o 00:02:13.943 CC lib/ftl/ftl_p2l.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.943 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.944 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.944 CC lib/ftl/utils/ftl_conf.o 00:02:13.944 CC lib/ftl/utils/ftl_md.o 00:02:13.944 CC lib/ftl/utils/ftl_mempool.o 00:02:13.944 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.944 CC lib/ftl/utils/ftl_property.o 00:02:13.944 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.944 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.944 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.944 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.944 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.944 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.944 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.944 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.944 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.944 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.944 CC lib/ftl/base/ftl_base_dev.o 00:02:13.944 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.944 CC lib/ftl/ftl_trace.o 00:02:14.510 LIB libspdk_nbd.a 00:02:14.510 SO libspdk_nbd.so.7.0 00:02:14.510 SYMLINK libspdk_nbd.so 00:02:14.510 LIB libspdk_scsi.a 00:02:14.510 SO libspdk_scsi.so.9.0 00:02:14.510 LIB libspdk_ublk.a 00:02:14.768 SYMLINK libspdk_scsi.so 00:02:14.768 SO libspdk_ublk.so.3.0 00:02:14.768 SYMLINK libspdk_ublk.so 00:02:14.768 LIB libspdk_ftl.a 00:02:15.026 CC lib/vhost/vhost.o 00:02:15.026 CC lib/vhost/vhost_scsi.o 00:02:15.026 CC lib/vhost/vhost_rpc.o 00:02:15.026 CC lib/iscsi/conn.o 00:02:15.026 CC lib/vhost/rte_vhost_user.o 00:02:15.026 CC lib/vhost/vhost_blk.o 00:02:15.026 CC lib/iscsi/init_grp.o 00:02:15.026 CC lib/iscsi/iscsi.o 00:02:15.026 CC lib/iscsi/md5.o 00:02:15.026 CC lib/iscsi/param.o 00:02:15.026 CC lib/iscsi/tgt_node.o 00:02:15.026 CC lib/iscsi/portal_grp.o 00:02:15.026 CC lib/iscsi/task.o 00:02:15.026 CC lib/iscsi/iscsi_subsystem.o 00:02:15.026 CC lib/iscsi/iscsi_rpc.o 00:02:15.026 SO libspdk_ftl.so.9.0 00:02:15.286 SYMLINK libspdk_ftl.so 00:02:15.543 LIB libspdk_nvmf.a 00:02:15.802 SO libspdk_nvmf.so.18.0 00:02:15.802 LIB libspdk_vhost.a 00:02:15.802 SO libspdk_vhost.so.8.0 00:02:15.802 SYMLINK libspdk_nvmf.so 00:02:15.802 SYMLINK libspdk_vhost.so 00:02:16.061 LIB libspdk_iscsi.a 00:02:16.061 SO libspdk_iscsi.so.8.0 00:02:16.320 SYMLINK 
libspdk_iscsi.so 00:02:16.887 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.887 LIB libspdk_env_dpdk_rpc.a 00:02:16.887 CC module/accel/error/accel_error.o 00:02:16.887 CC module/accel/error/accel_error_rpc.o 00:02:16.887 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.887 CC module/accel/ioat/accel_ioat.o 00:02:16.887 CC module/blob/bdev/blob_bdev.o 00:02:16.887 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.887 CC module/accel/dsa/accel_dsa.o 00:02:16.887 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.887 CC module/sock/posix/posix.o 00:02:16.887 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.887 CC module/keyring/file/keyring.o 00:02:16.887 CC module/keyring/file/keyring_rpc.o 00:02:16.887 CC module/accel/iaa/accel_iaa.o 00:02:16.887 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.887 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.887 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.887 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.145 LIB libspdk_scheduler_gscheduler.a 00:02:17.145 LIB libspdk_keyring_file.a 00:02:17.145 LIB libspdk_accel_error.a 00:02:17.145 LIB libspdk_scheduler_dynamic.a 00:02:17.145 LIB libspdk_accel_ioat.a 00:02:17.145 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.145 SO libspdk_scheduler_gscheduler.so.4.0 00:02:17.145 SO libspdk_keyring_file.so.1.0 00:02:17.145 LIB libspdk_accel_iaa.a 00:02:17.145 SO libspdk_accel_ioat.so.6.0 00:02:17.145 SO libspdk_scheduler_dynamic.so.4.0 00:02:17.145 SO libspdk_accel_error.so.2.0 00:02:17.146 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:17.146 LIB libspdk_accel_dsa.a 00:02:17.146 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.146 LIB libspdk_blob_bdev.a 00:02:17.146 SO libspdk_accel_iaa.so.3.0 00:02:17.146 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.146 SYMLINK libspdk_keyring_file.so 00:02:17.146 SO libspdk_accel_dsa.so.5.0 00:02:17.146 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.146 SYMLINK libspdk_accel_error.so 00:02:17.146 SYMLINK libspdk_accel_ioat.so 00:02:17.146 SO libspdk_blob_bdev.so.11.0 00:02:17.146 SYMLINK libspdk_accel_iaa.so 00:02:17.146 SYMLINK libspdk_accel_dsa.so 00:02:17.146 SYMLINK libspdk_blob_bdev.so 00:02:17.404 LIB libspdk_sock_posix.a 00:02:17.662 SO libspdk_sock_posix.so.6.0 00:02:17.662 SYMLINK libspdk_sock_posix.so 00:02:17.662 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.662 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.662 CC module/bdev/error/vbdev_error.o 00:02:17.662 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.662 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.662 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.662 CC module/bdev/null/bdev_null_rpc.o 00:02:17.662 CC module/bdev/nvme/bdev_nvme.o 00:02:17.662 CC module/bdev/aio/bdev_aio.o 00:02:17.662 CC module/bdev/null/bdev_null.o 00:02:17.662 CC module/bdev/nvme/nvme_rpc.o 00:02:17.662 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.662 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.662 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.662 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.662 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.662 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.662 CC module/bdev/nvme/vbdev_opal.o 00:02:17.662 CC module/bdev/gpt/gpt.o 00:02:17.662 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.662 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.662 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.662 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.662 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.662 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.662 CC 
module/bdev/delay/vbdev_delay.o 00:02:17.662 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.662 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.662 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.662 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.662 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.662 CC module/bdev/malloc/bdev_malloc.o 00:02:17.662 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.662 CC module/bdev/ftl/bdev_ftl.o 00:02:17.662 CC module/bdev/raid/bdev_raid.o 00:02:17.662 CC module/bdev/split/vbdev_split.o 00:02:17.662 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.662 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.662 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.662 CC module/bdev/raid/raid0.o 00:02:17.662 CC module/bdev/raid/raid1.o 00:02:17.662 CC module/bdev/raid/concat.o 00:02:17.920 LIB libspdk_blobfs_bdev.a 00:02:17.920 LIB libspdk_bdev_error.a 00:02:17.920 LIB libspdk_bdev_split.a 00:02:18.178 SO libspdk_blobfs_bdev.so.6.0 00:02:18.178 LIB libspdk_bdev_gpt.a 00:02:18.178 LIB libspdk_bdev_passthru.a 00:02:18.178 SO libspdk_bdev_error.so.6.0 00:02:18.178 LIB libspdk_bdev_zone_block.a 00:02:18.178 SO libspdk_bdev_gpt.so.6.0 00:02:18.178 SO libspdk_bdev_split.so.6.0 00:02:18.178 LIB libspdk_bdev_aio.a 00:02:18.178 SO libspdk_bdev_zone_block.so.6.0 00:02:18.178 SO libspdk_bdev_passthru.so.6.0 00:02:18.178 LIB libspdk_bdev_malloc.a 00:02:18.178 SYMLINK libspdk_blobfs_bdev.so 00:02:18.178 LIB libspdk_bdev_delay.a 00:02:18.178 SYMLINK libspdk_bdev_error.so 00:02:18.178 LIB libspdk_bdev_null.a 00:02:18.178 SO libspdk_bdev_aio.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_gpt.so 00:02:18.178 LIB libspdk_bdev_iscsi.a 00:02:18.178 SO libspdk_bdev_malloc.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_split.so 00:02:18.178 LIB libspdk_bdev_ftl.a 00:02:18.178 SO libspdk_bdev_null.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_zone_block.so 00:02:18.178 SO libspdk_bdev_delay.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_passthru.so 00:02:18.178 SO libspdk_bdev_iscsi.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_aio.so 00:02:18.178 SO libspdk_bdev_ftl.so.6.0 00:02:18.178 SYMLINK libspdk_bdev_malloc.so 00:02:18.178 SYMLINK libspdk_bdev_delay.so 00:02:18.178 LIB libspdk_bdev_lvol.a 00:02:18.178 SYMLINK libspdk_bdev_null.so 00:02:18.178 SYMLINK libspdk_bdev_iscsi.so 00:02:18.178 SYMLINK libspdk_bdev_ftl.so 00:02:18.178 LIB libspdk_bdev_virtio.a 00:02:18.178 SO libspdk_bdev_lvol.so.6.0 00:02:18.436 SO libspdk_bdev_virtio.so.6.0 00:02:18.436 SYMLINK libspdk_bdev_lvol.so 00:02:18.436 SYMLINK libspdk_bdev_virtio.so 00:02:18.436 LIB libspdk_bdev_raid.a 00:02:18.694 SO libspdk_bdev_raid.so.6.0 00:02:18.694 SYMLINK libspdk_bdev_raid.so 00:02:19.261 LIB libspdk_bdev_nvme.a 00:02:19.519 SO libspdk_bdev_nvme.so.7.0 00:02:19.519 SYMLINK libspdk_bdev_nvme.so 00:02:20.453 CC module/event/subsystems/keyring/keyring.o 00:02:20.453 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.453 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.453 CC module/event/subsystems/sock/sock.o 00:02:20.453 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.453 CC module/event/subsystems/vmd/vmd.o 00:02:20.453 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.453 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.453 LIB libspdk_event_keyring.a 00:02:20.453 LIB libspdk_event_sock.a 00:02:20.453 SO libspdk_event_keyring.so.1.0 00:02:20.453 SO libspdk_event_sock.so.5.0 00:02:20.453 LIB libspdk_event_iobuf.a 00:02:20.453 LIB libspdk_event_vhost_blk.a 00:02:20.453 LIB libspdk_event_vmd.a 00:02:20.453 LIB libspdk_event_scheduler.a 00:02:20.453 SO 
libspdk_event_iobuf.so.3.0 00:02:20.453 SO libspdk_event_vhost_blk.so.3.0 00:02:20.453 SO libspdk_event_vmd.so.6.0 00:02:20.453 SYMLINK libspdk_event_keyring.so 00:02:20.453 SO libspdk_event_scheduler.so.4.0 00:02:20.453 SYMLINK libspdk_event_sock.so 00:02:20.453 SYMLINK libspdk_event_iobuf.so 00:02:20.453 SYMLINK libspdk_event_vhost_blk.so 00:02:20.453 SYMLINK libspdk_event_vmd.so 00:02:20.453 SYMLINK libspdk_event_scheduler.so 00:02:20.711 CC module/event/subsystems/accel/accel.o 00:02:20.969 LIB libspdk_event_accel.a 00:02:20.969 SO libspdk_event_accel.so.6.0 00:02:20.969 SYMLINK libspdk_event_accel.so 00:02:21.534 CC module/event/subsystems/bdev/bdev.o 00:02:21.534 LIB libspdk_event_bdev.a 00:02:21.534 SO libspdk_event_bdev.so.6.0 00:02:21.534 SYMLINK libspdk_event_bdev.so 00:02:22.099 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.099 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.099 CC module/event/subsystems/ublk/ublk.o 00:02:22.099 CC module/event/subsystems/scsi/scsi.o 00:02:22.099 CC module/event/subsystems/nbd/nbd.o 00:02:22.099 LIB libspdk_event_nbd.a 00:02:22.099 LIB libspdk_event_ublk.a 00:02:22.099 LIB libspdk_event_scsi.a 00:02:22.099 SO libspdk_event_nbd.so.6.0 00:02:22.099 SO libspdk_event_ublk.so.3.0 00:02:22.099 LIB libspdk_event_nvmf.a 00:02:22.099 SO libspdk_event_scsi.so.6.0 00:02:22.099 SYMLINK libspdk_event_nbd.so 00:02:22.099 SYMLINK libspdk_event_ublk.so 00:02:22.357 SO libspdk_event_nvmf.so.6.0 00:02:22.357 SYMLINK libspdk_event_scsi.so 00:02:22.357 SYMLINK libspdk_event_nvmf.so 00:02:22.616 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.616 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.875 LIB libspdk_event_vhost_scsi.a 00:02:22.875 LIB libspdk_event_iscsi.a 00:02:22.875 SO libspdk_event_vhost_scsi.so.3.0 00:02:22.875 SO libspdk_event_iscsi.so.6.0 00:02:22.875 SYMLINK libspdk_event_vhost_scsi.so 00:02:22.875 SYMLINK libspdk_event_iscsi.so 00:02:23.133 SO libspdk.so.6.0 00:02:23.133 SYMLINK libspdk.so 00:02:23.396 CC test/rpc_client/rpc_client_test.o 00:02:23.396 TEST_HEADER include/spdk/accel_module.h 00:02:23.396 TEST_HEADER include/spdk/accel.h 00:02:23.396 TEST_HEADER include/spdk/assert.h 00:02:23.396 TEST_HEADER include/spdk/barrier.h 00:02:23.396 TEST_HEADER include/spdk/base64.h 00:02:23.396 TEST_HEADER include/spdk/bdev.h 00:02:23.396 TEST_HEADER include/spdk/bdev_module.h 00:02:23.396 TEST_HEADER include/spdk/bit_array.h 00:02:23.396 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.396 TEST_HEADER include/spdk/bit_pool.h 00:02:23.396 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.396 CC app/trace_record/trace_record.o 00:02:23.396 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.396 TEST_HEADER include/spdk/blobfs.h 00:02:23.396 TEST_HEADER include/spdk/conf.h 00:02:23.396 TEST_HEADER include/spdk/blob.h 00:02:23.396 TEST_HEADER include/spdk/config.h 00:02:23.396 TEST_HEADER include/spdk/cpuset.h 00:02:23.396 TEST_HEADER include/spdk/crc16.h 00:02:23.396 TEST_HEADER include/spdk/crc32.h 00:02:23.396 TEST_HEADER include/spdk/crc64.h 00:02:23.396 CC app/spdk_nvme_perf/perf.o 00:02:23.396 TEST_HEADER include/spdk/dif.h 00:02:23.396 CXX app/trace/trace.o 00:02:23.396 TEST_HEADER include/spdk/dma.h 00:02:23.396 TEST_HEADER include/spdk/endian.h 00:02:23.396 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.396 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.396 CC app/spdk_top/spdk_top.o 00:02:23.396 TEST_HEADER include/spdk/env.h 00:02:23.396 CC app/spdk_lspci/spdk_lspci.o 00:02:23.396 TEST_HEADER include/spdk/event.h 00:02:23.396 
TEST_HEADER include/spdk/fd_group.h 00:02:23.396 TEST_HEADER include/spdk/fd.h 00:02:23.396 CC app/spdk_nvme_identify/identify.o 00:02:23.396 TEST_HEADER include/spdk/file.h 00:02:23.396 TEST_HEADER include/spdk/ftl.h 00:02:23.396 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.396 TEST_HEADER include/spdk/hexlify.h 00:02:23.396 TEST_HEADER include/spdk/histogram_data.h 00:02:23.396 TEST_HEADER include/spdk/idxd.h 00:02:23.396 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.396 TEST_HEADER include/spdk/init.h 00:02:23.396 CC app/nvmf_tgt/nvmf_main.o 00:02:23.396 TEST_HEADER include/spdk/ioat.h 00:02:23.396 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.396 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.396 CC app/spdk_dd/spdk_dd.o 00:02:23.396 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.396 CC app/vhost/vhost.o 00:02:23.396 TEST_HEADER include/spdk/json.h 00:02:23.396 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.396 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.658 TEST_HEADER include/spdk/keyring.h 00:02:23.658 TEST_HEADER include/spdk/keyring_module.h 00:02:23.658 TEST_HEADER include/spdk/likely.h 00:02:23.658 TEST_HEADER include/spdk/log.h 00:02:23.658 TEST_HEADER include/spdk/lvol.h 00:02:23.658 CC test/app/histogram_perf/histogram_perf.o 00:02:23.658 TEST_HEADER include/spdk/memory.h 00:02:23.658 TEST_HEADER include/spdk/mmio.h 00:02:23.658 TEST_HEADER include/spdk/nbd.h 00:02:23.658 TEST_HEADER include/spdk/notify.h 00:02:23.658 TEST_HEADER include/spdk/nvme.h 00:02:23.658 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.658 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.658 CC test/event/reactor_perf/reactor_perf.o 00:02:23.658 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.658 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.658 CC test/thread/poller_perf/poller_perf.o 00:02:23.658 CC test/env/memory/memory_ut.o 00:02:23.658 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.658 CC app/spdk_tgt/spdk_tgt.o 00:02:23.658 CC test/app/jsoncat/jsoncat.o 00:02:23.658 CC test/event/reactor/reactor.o 00:02:23.658 CC test/nvme/overhead/overhead.o 00:02:23.658 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.658 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.658 CC test/nvme/fdp/fdp.o 00:02:23.658 CC test/env/vtophys/vtophys.o 00:02:23.658 CC test/env/pci/pci_ut.o 00:02:23.658 CC test/app/stub/stub.o 00:02:23.658 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.658 CC test/event/event_perf/event_perf.o 00:02:23.658 CC test/nvme/aer/aer.o 00:02:23.658 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.658 CC test/nvme/startup/startup.o 00:02:23.658 CC test/nvme/reserve/reserve.o 00:02:23.658 CC examples/nvme/hotplug/hotplug.o 00:02:23.658 CC test/nvme/sgl/sgl.o 00:02:23.658 TEST_HEADER include/spdk/nvmf.h 00:02:23.658 CC examples/sock/hello_world/hello_sock.o 00:02:23.658 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.658 CC test/nvme/simple_copy/simple_copy.o 00:02:23.658 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.658 CC examples/vmd/led/led.o 00:02:23.658 CC test/nvme/err_injection/err_injection.o 00:02:23.658 TEST_HEADER include/spdk/opal.h 00:02:23.658 CC test/nvme/compliance/nvme_compliance.o 00:02:23.658 CC examples/idxd/perf/perf.o 00:02:23.658 CC examples/nvme/reconnect/reconnect.o 00:02:23.658 TEST_HEADER include/spdk/opal_spec.h 00:02:23.658 TEST_HEADER include/spdk/pci_ids.h 00:02:23.658 CC test/nvme/e2edp/nvme_dp.o 00:02:23.658 CC test/nvme/boot_partition/boot_partition.o 00:02:23.658 CC test/nvme/connect_stress/connect_stress.o 00:02:23.658 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.658 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:23.658 CC test/nvme/reset/reset.o 00:02:23.658 CC examples/util/zipf/zipf.o 00:02:23.658 CC test/nvme/cuse/cuse.o 00:02:23.658 CC examples/nvme/arbitration/arbitration.o 00:02:23.658 TEST_HEADER include/spdk/pipe.h 00:02:23.658 CC examples/vmd/lsvmd/lsvmd.o 00:02:23.658 CC examples/accel/perf/accel_perf.o 00:02:23.658 CC examples/ioat/perf/perf.o 00:02:23.658 CC app/fio/nvme/fio_plugin.o 00:02:23.658 CC test/app/bdev_svc/bdev_svc.o 00:02:23.658 CC test/dma/test_dma/test_dma.o 00:02:23.658 TEST_HEADER include/spdk/queue.h 00:02:23.658 CC examples/ioat/verify/verify.o 00:02:23.658 TEST_HEADER include/spdk/reduce.h 00:02:23.658 CC examples/nvme/hello_world/hello_world.o 00:02:23.658 CC test/event/app_repeat/app_repeat.o 00:02:23.658 TEST_HEADER include/spdk/rpc.h 00:02:23.658 TEST_HEADER include/spdk/scheduler.h 00:02:23.658 TEST_HEADER include/spdk/scsi.h 00:02:23.658 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.658 CC test/blobfs/mkfs/mkfs.o 00:02:23.658 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.658 TEST_HEADER include/spdk/sock.h 00:02:23.658 CC test/bdev/bdevio/bdevio.o 00:02:23.658 CC test/accel/dif/dif.o 00:02:23.658 CC test/event/scheduler/scheduler.o 00:02:23.658 TEST_HEADER include/spdk/stdinc.h 00:02:23.658 CC examples/nvmf/nvmf/nvmf.o 00:02:23.658 TEST_HEADER include/spdk/string.h 00:02:23.658 TEST_HEADER include/spdk/thread.h 00:02:23.658 CC examples/blob/hello_world/hello_blob.o 00:02:23.658 CC examples/thread/thread/thread_ex.o 00:02:23.658 TEST_HEADER include/spdk/trace.h 00:02:23.658 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.658 TEST_HEADER include/spdk/trace_parser.h 00:02:23.658 TEST_HEADER include/spdk/tree.h 00:02:23.658 TEST_HEADER include/spdk/ublk.h 00:02:23.658 CC examples/blob/cli/blobcli.o 00:02:23.658 TEST_HEADER include/spdk/util.h 00:02:23.658 TEST_HEADER include/spdk/uuid.h 00:02:23.658 TEST_HEADER include/spdk/version.h 00:02:23.658 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.658 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.658 LINK rpc_client_test 00:02:23.658 TEST_HEADER include/spdk/vhost.h 00:02:23.658 TEST_HEADER include/spdk/vmd.h 00:02:23.658 TEST_HEADER include/spdk/xor.h 00:02:23.658 TEST_HEADER include/spdk/zipf.h 00:02:23.658 CXX test/cpp_headers/accel.o 00:02:23.658 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.658 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:23.919 LINK spdk_lspci 00:02:23.919 LINK histogram_perf 00:02:23.919 CC test/lvol/esnap/esnap.o 00:02:23.919 LINK nvmf_tgt 00:02:23.919 LINK spdk_nvme_discover 00:02:23.919 LINK jsoncat 00:02:23.919 LINK poller_perf 00:02:23.919 LINK event_perf 00:02:23.919 LINK vtophys 00:02:23.919 LINK env_dpdk_post_init 00:02:23.919 LINK reactor_perf 00:02:23.919 LINK led 00:02:23.919 LINK interrupt_tgt 00:02:23.919 LINK spdk_trace_record 00:02:23.919 LINK vhost 00:02:23.919 LINK reactor 00:02:23.919 LINK stub 00:02:23.919 LINK zipf 00:02:23.919 LINK boot_partition 00:02:23.919 LINK fused_ordering 00:02:23.919 LINK lsvmd 00:02:23.919 LINK iscsi_tgt 00:02:23.919 LINK spdk_tgt 00:02:23.919 LINK startup 00:02:23.919 LINK app_repeat 00:02:23.919 LINK connect_stress 00:02:23.919 LINK err_injection 00:02:23.919 LINK bdev_svc 00:02:23.919 LINK mkfs 00:02:23.919 LINK reserve 00:02:23.919 LINK hello_sock 00:02:23.919 LINK ioat_perf 00:02:23.919 LINK doorbell_aers 00:02:23.919 LINK hotplug 00:02:24.178 LINK sgl 00:02:24.178 CXX test/cpp_headers/accel_module.o 00:02:24.178 LINK simple_copy 
00:02:24.178 LINK scheduler 00:02:24.178 LINK verify 00:02:24.178 LINK nvme_dp 00:02:24.178 LINK hello_blob 00:02:24.178 LINK spdk_dd 00:02:24.178 LINK hello_world 00:02:24.178 LINK reset 00:02:24.178 LINK overhead 00:02:24.178 LINK hello_bdev 00:02:24.178 LINK thread 00:02:24.178 LINK spdk_trace 00:02:24.178 LINK aer 00:02:24.178 LINK nvme_compliance 00:02:24.178 LINK arbitration 00:02:24.178 LINK idxd_perf 00:02:24.178 LINK nvmf 00:02:24.178 CXX test/cpp_headers/assert.o 00:02:24.178 LINK fdp 00:02:24.178 CXX test/cpp_headers/barrier.o 00:02:24.178 CXX test/cpp_headers/base64.o 00:02:24.178 LINK test_dma 00:02:24.178 LINK reconnect 00:02:24.178 CXX test/cpp_headers/bdev.o 00:02:24.178 CXX test/cpp_headers/bdev_module.o 00:02:24.178 CXX test/cpp_headers/bdev_zone.o 00:02:24.178 CXX test/cpp_headers/bit_array.o 00:02:24.178 CXX test/cpp_headers/bit_pool.o 00:02:24.178 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:24.178 LINK dif 00:02:24.178 CXX test/cpp_headers/blob_bdev.o 00:02:24.178 LINK bdevio 00:02:24.178 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.178 LINK pci_ut 00:02:24.178 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.440 CXX test/cpp_headers/blobfs.o 00:02:24.440 CXX test/cpp_headers/blob.o 00:02:24.440 CXX test/cpp_headers/conf.o 00:02:24.440 CXX test/cpp_headers/config.o 00:02:24.440 CXX test/cpp_headers/cpuset.o 00:02:24.440 CC app/fio/bdev/fio_plugin.o 00:02:24.440 CC examples/nvme/abort/abort.o 00:02:24.440 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:24.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.440 CXX test/cpp_headers/crc16.o 00:02:24.440 CXX test/cpp_headers/crc32.o 00:02:24.440 CXX test/cpp_headers/crc64.o 00:02:24.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.440 CXX test/cpp_headers/dif.o 00:02:24.440 CXX test/cpp_headers/dma.o 00:02:24.440 CXX test/cpp_headers/endian.o 00:02:24.440 CXX test/cpp_headers/env_dpdk.o 00:02:24.440 CXX test/cpp_headers/env.o 00:02:24.440 CXX test/cpp_headers/event.o 00:02:24.440 CXX test/cpp_headers/fd_group.o 00:02:24.440 CXX test/cpp_headers/fd.o 00:02:24.440 CXX test/cpp_headers/file.o 00:02:24.440 CXX test/cpp_headers/ftl.o 00:02:24.440 LINK accel_perf 00:02:24.440 CXX test/cpp_headers/gpt_spec.o 00:02:24.440 CXX test/cpp_headers/hexlify.o 00:02:24.440 CXX test/cpp_headers/histogram_data.o 00:02:24.440 CXX test/cpp_headers/idxd.o 00:02:24.440 CXX test/cpp_headers/idxd_spec.o 00:02:24.440 CXX test/cpp_headers/init.o 00:02:24.440 CXX test/cpp_headers/ioat.o 00:02:24.440 LINK nvme_manage 00:02:24.440 CXX test/cpp_headers/ioat_spec.o 00:02:24.440 CXX test/cpp_headers/iscsi_spec.o 00:02:24.440 CXX test/cpp_headers/json.o 00:02:24.440 CXX test/cpp_headers/jsonrpc.o 00:02:24.440 LINK blobcli 00:02:24.440 LINK nvme_fuzz 00:02:24.440 CXX test/cpp_headers/keyring.o 00:02:24.440 CXX test/cpp_headers/keyring_module.o 00:02:24.440 CXX test/cpp_headers/likely.o 00:02:24.440 CXX test/cpp_headers/lvol.o 00:02:24.440 CXX test/cpp_headers/log.o 00:02:24.440 CXX test/cpp_headers/memory.o 00:02:24.440 CXX test/cpp_headers/mmio.o 00:02:24.440 CXX test/cpp_headers/nbd.o 00:02:24.440 CXX test/cpp_headers/notify.o 00:02:24.440 CXX test/cpp_headers/nvme.o 00:02:24.440 CXX test/cpp_headers/nvme_intel.o 00:02:24.440 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.701 CXX test/cpp_headers/nvme_spec.o 00:02:24.701 LINK cmb_copy 00:02:24.701 CXX test/cpp_headers/nvme_zns.o 00:02:24.701 LINK spdk_nvme 00:02:24.701 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.701 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:02:24.701 CXX test/cpp_headers/nvmf.o 00:02:24.701 CXX test/cpp_headers/nvmf_spec.o 00:02:24.701 CXX test/cpp_headers/nvmf_transport.o 00:02:24.701 CXX test/cpp_headers/opal.o 00:02:24.701 CXX test/cpp_headers/opal_spec.o 00:02:24.701 CXX test/cpp_headers/pci_ids.o 00:02:24.701 CXX test/cpp_headers/pipe.o 00:02:24.701 CXX test/cpp_headers/queue.o 00:02:24.701 LINK mem_callbacks 00:02:24.701 CXX test/cpp_headers/reduce.o 00:02:24.701 CXX test/cpp_headers/rpc.o 00:02:24.701 CXX test/cpp_headers/scheduler.o 00:02:24.701 CXX test/cpp_headers/scsi.o 00:02:24.701 CXX test/cpp_headers/sock.o 00:02:24.701 CXX test/cpp_headers/scsi_spec.o 00:02:24.701 CXX test/cpp_headers/stdinc.o 00:02:24.701 LINK pmr_persistence 00:02:24.701 CXX test/cpp_headers/string.o 00:02:24.701 CXX test/cpp_headers/thread.o 00:02:24.701 CXX test/cpp_headers/trace.o 00:02:24.701 CXX test/cpp_headers/trace_parser.o 00:02:24.701 LINK spdk_nvme_perf 00:02:24.701 CXX test/cpp_headers/tree.o 00:02:24.701 CXX test/cpp_headers/ublk.o 00:02:24.958 CXX test/cpp_headers/util.o 00:02:24.958 CXX test/cpp_headers/uuid.o 00:02:24.958 CXX test/cpp_headers/version.o 00:02:24.958 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.958 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.958 LINK spdk_nvme_identify 00:02:24.958 CXX test/cpp_headers/vhost.o 00:02:24.958 LINK bdevperf 00:02:24.958 CXX test/cpp_headers/vmd.o 00:02:24.958 CXX test/cpp_headers/zipf.o 00:02:24.958 CXX test/cpp_headers/xor.o 00:02:24.958 LINK spdk_top 00:02:24.958 LINK memory_ut 00:02:25.215 LINK abort 00:02:25.215 LINK spdk_bdev 00:02:25.215 LINK vhost_fuzz 00:02:25.215 LINK cuse 00:02:26.151 LINK iscsi_fuzz 00:02:27.527 LINK esnap 00:02:28.095 00:02:28.095 real 0m48.308s 00:02:28.095 user 6m53.138s 00:02:28.095 sys 3m9.260s 00:02:28.095 12:46:05 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:28.095 12:46:05 make -- common/autotest_common.sh@10 -- $ set +x 00:02:28.095 ************************************ 00:02:28.095 END TEST make 00:02:28.095 ************************************ 00:02:28.095 12:46:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:28.096 12:46:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:28.096 12:46:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:28.096 12:46:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.096 12:46:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:28.096 12:46:05 -- pm/common@44 -- $ pid=3373156 00:02:28.096 12:46:05 -- pm/common@50 -- $ kill -TERM 3373156 00:02:28.096 12:46:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.096 12:46:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:28.096 12:46:05 -- pm/common@44 -- $ pid=3373158 00:02:28.096 12:46:05 -- pm/common@50 -- $ kill -TERM 3373158 00:02:28.096 12:46:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.096 12:46:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:28.096 12:46:05 -- pm/common@44 -- $ pid=3373160 00:02:28.096 12:46:05 -- pm/common@50 -- $ kill -TERM 3373160 00:02:28.096 12:46:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.096 12:46:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:28.096 12:46:05 -- 
pm/common@44 -- $ pid=3373194 00:02:28.096 12:46:05 -- pm/common@50 -- $ sudo -E kill -TERM 3373194 00:02:28.096 12:46:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:28.096 12:46:05 -- nvmf/common.sh@7 -- # uname -s 00:02:28.096 12:46:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.096 12:46:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.096 12:46:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.096 12:46:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.096 12:46:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.096 12:46:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.096 12:46:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.096 12:46:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.096 12:46:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.096 12:46:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.096 12:46:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:02:28.096 12:46:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:02:28.096 12:46:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.096 12:46:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.096 12:46:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:28.096 12:46:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:28.096 12:46:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:28.096 12:46:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.096 12:46:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.096 12:46:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.096 12:46:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.096 12:46:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.096 12:46:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.096 12:46:05 -- paths/export.sh@5 -- # export PATH 00:02:28.096 12:46:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.096 12:46:05 -- nvmf/common.sh@47 -- # : 0 00:02:28.096 12:46:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:28.096 12:46:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:28.096 12:46:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:28.096 12:46:05 -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.096 12:46:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.096 12:46:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:28.096 12:46:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:28.096 12:46:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:28.096 12:46:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.356 12:46:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.356 12:46:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.356 12:46:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.356 12:46:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.356 12:46:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.356 12:46:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:28.356 12:46:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.356 12:46:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.356 12:46:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.356 12:46:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3430020 00:02:28.356 12:46:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:28.356 12:46:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:28.356 12:46:05 -- pm/common@17 -- # local monitor 00:02:28.356 12:46:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.356 12:46:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.356 12:46:06 -- pm/common@21 -- # date +%s 00:02:28.356 12:46:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.356 12:46:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.356 12:46:06 -- pm/common@21 -- # date +%s 00:02:28.356 12:46:06 -- pm/common@25 -- # sleep 1 00:02:28.356 12:46:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715769966 00:02:28.356 12:46:06 -- pm/common@21 -- # date +%s 00:02:28.356 12:46:06 -- pm/common@21 -- # date +%s 00:02:28.356 12:46:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715769966 00:02:28.356 12:46:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715769966 00:02:28.356 12:46:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715769966 00:02:28.356 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715769966_collect-vmstat.pm.log 00:02:28.356 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715769966_collect-cpu-temp.pm.log 00:02:28.356 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715769966_collect-cpu-load.pm.log 00:02:28.356 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715769966_collect-bmc-pm.bmc.pm.log 00:02:29.294 12:46:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:29.294 12:46:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:29.294 12:46:07 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:29.294 12:46:07 -- common/autotest_common.sh@10 -- # set +x 00:02:29.294 12:46:07 -- spdk/autotest.sh@59 -- # create_test_list 00:02:29.294 12:46:07 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:29.294 12:46:07 -- common/autotest_common.sh@10 -- # set +x 00:02:29.294 12:46:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:29.294 12:46:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.294 12:46:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.294 12:46:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:29.294 12:46:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.294 12:46:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:29.294 12:46:07 -- common/autotest_common.sh@1451 -- # uname 00:02:29.294 12:46:07 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:29.294 12:46:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:29.294 12:46:07 -- common/autotest_common.sh@1471 -- # uname 00:02:29.294 12:46:07 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:29.294 12:46:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:29.294 12:46:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:29.294 12:46:07 -- spdk/autotest.sh@72 -- # hash lcov 00:02:29.294 12:46:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:29.294 12:46:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:29.294 --rc lcov_branch_coverage=1 00:02:29.294 --rc lcov_function_coverage=1 00:02:29.294 --rc genhtml_branch_coverage=1 00:02:29.294 --rc genhtml_function_coverage=1 00:02:29.294 --rc genhtml_legend=1 00:02:29.294 --rc geninfo_all_blocks=1 00:02:29.294 ' 00:02:29.294 12:46:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:29.294 --rc lcov_branch_coverage=1 00:02:29.294 --rc lcov_function_coverage=1 00:02:29.294 --rc genhtml_branch_coverage=1 00:02:29.294 --rc genhtml_function_coverage=1 00:02:29.294 --rc genhtml_legend=1 00:02:29.294 --rc geninfo_all_blocks=1 00:02:29.294 ' 00:02:29.294 12:46:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:29.294 --rc lcov_branch_coverage=1 00:02:29.294 --rc lcov_function_coverage=1 00:02:29.294 --rc genhtml_branch_coverage=1 00:02:29.294 --rc genhtml_function_coverage=1 00:02:29.294 --rc genhtml_legend=1 00:02:29.294 --rc geninfo_all_blocks=1 00:02:29.294 --no-external' 00:02:29.294 12:46:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:29.294 --rc lcov_branch_coverage=1 00:02:29.294 --rc lcov_function_coverage=1 00:02:29.294 --rc genhtml_branch_coverage=1 00:02:29.294 --rc genhtml_function_coverage=1 00:02:29.294 --rc genhtml_legend=1 00:02:29.294 --rc geninfo_all_blocks=1 00:02:29.294 --no-external' 00:02:29.294 12:46:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:29.294 lcov: LCOV version 1.14 00:02:29.553 
12:46:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:02:39.529 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:39.529 [geninfo output condensed: the same 'no functions found' warning pair repeated for lib/ftl/upgrade/ftl_band_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_chunk_upgrade.gcno, and then (00:02:51.780 through 00:02:52.040) for every test/cpp_headers/*.gcno from accel.gcno through xor.gcno]
00:02:53.419 12:46:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:53.420 12:46:31 -- common/autotest_common.sh@720 -- # xtrace_disable
00:02:53.420 12:46:31 -- common/autotest_common.sh@10 -- # set +x
00:02:53.420 12:46:31 -- spdk/autotest.sh@91 -- # rm -f
00:02:53.420 12:46:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:55.956 0000:5f:00.0 (8086 0a54): Already using the nvme driver
00:02:55.956 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:55.956 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:56.215 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:56.215 12:46:34 -- spdk/autotest.sh@96 -- # get_zoned_devs
12:46:34 -- common/autotest_common.sh@1665 -- # zoned_devs=()
12:46:34 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
12:46:34 -- common/autotest_common.sh@1666 -- # local nvme bdf
12:46:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
12:46:34 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
12:46:34 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
12:46:34 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
12:46:34 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
12:46:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
12:46:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
12:46:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
12:46:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
12:46:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
12:46:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:02:56.474 12:46:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:56.474 12:46:34 -- scripts/common.sh@391 -- # pt=
00:02:56.474 12:46:34 -- scripts/common.sh@392 -- # return 1
00:02:56.474 12:46:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:56.474 1+0 records in
00:02:56.474 1+0 records out
00:02:56.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00179901 s, 583 MB/s
00:02:56.474 12:46:34 -- spdk/autotest.sh@118 -- # sync
00:02:56.474 12:46:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:56.474 12:46:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:56.474 12:46:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes
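For context: the cleanup above runs two guards before scrubbing a namespace. get_zoned_devs skips zoned block devices (whose /sys/block/*/queue/zoned attribute reads something other than "none"), and block_in_use treats a device as busy when spdk-gpt.py or blkid finds a partition table; only then is the first MiB zeroed. A standalone bash sketch of the same checks (device names are illustrative, and the dd is exactly as destructive as in the log):

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]   # "none" means conventional
    }

    block_in_use() {
        local block=$1
        # A non-empty PTTYPE (gpt, dos, ...) means a partition table exists.
        [[ -n $(blkid -s PTTYPE -o value "$block") ]]
    }

    for dev in /dev/nvme*n1; do
        is_block_zoned "${dev##*/}" && continue      # leave zoned namespaces alone
        if ! block_in_use "$dev"; then
            dd if=/dev/zero of="$dev" bs=1M count=1  # destructive: wipes the first MiB
        fi
    done

The real block_in_use does more than this sketch (the spdk-gpt.py probe seen above, for one); the blkid fallback is the piece the trace exercises after "No valid GPT data, bailing".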
00:03:01.752 12:46:38 -- spdk/autotest.sh@124 -- # uname -s
00:03:01.752 12:46:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:01.752 12:46:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
12:46:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
12:46:38 -- common/autotest_common.sh@1103 -- # xtrace_disable
12:46:38 -- common/autotest_common.sh@10 -- # set +x
00:03:01.752 ************************************
00:03:01.752 START TEST setup.sh
00:03:01.752 ************************************
00:03:01.752 12:46:38 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:03:01.752 * Looking for test storage...
00:03:01.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:03:01.752 12:46:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:01.752 12:46:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:01.752 12:46:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:01.752 12:46:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:01.752 12:46:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:01.752 12:46:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:01.752 ************************************
00:03:01.752 START TEST acl
00:03:01.752 ************************************
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:01.752 * Looking for test storage...
00:03:01.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.752 12:46:38 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:01.752 12:46:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:01.752 12:46:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:01.752 12:46:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:05.044 12:46:42 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:05.044 12:46:42 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:05.044 12:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.044 12:46:42 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:05.044 12:46:42 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.044 12:46:42 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:07.579 Hugepages
00:03:07.579 node hugesize free / total
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:07.579 12:46:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.579
12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:07.580 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:07.580 12:46:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.580
00:03:07.580 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:07.580 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:07.580 12:46:44 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:07.580 12:46:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[xtrace condensed: BDFs 0000:00:04.0 through 0000:00:04.7 each matched *:*:*.* but failed [[ ioatdma == nvme ]] and were skipped with continue]
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]]
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[xtrace condensed: BDFs 0000:80:04.0 through 0000:80:04.7 were skipped the same way as the 0000:00:04.* ioatdma devices]
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:07.580 12:46:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:07.580 12:46:45 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:07.580 12:46:45 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:07.580 12:46:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:07.580 ************************************
00:03:07.580 START TEST denied
00:03:07.580 ************************************
00:03:07.580 12:46:45 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:03:07.580 12:46:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0'
12:46:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0'
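For context: the denied test launched here exports PCI_BLOCKED with the NVMe controller's BDF, runs scripts/setup.sh config, and asserts (via the grep primed above) that the controller is reported as skipped; the allowed test later inverts this with PCI_ALLOWED and expects an "nvme -> vfio-pci" rebind. A sketch of the pass/fail core of that assertion, outside the harness, with the BDF taken from this run:

    BDF=0000:5f:00.0    # the controller under test in this log
    if PCI_BLOCKED="$BDF" ./scripts/setup.sh config \
            | grep -q "Skipping denied controller at $BDF"; then
        echo "denied: PASS (controller left on its original driver)"
    else
        echo "denied: FAIL" >&2
    fi
    # Companion check, as the allowed test does further down:
    #   PCI_ALLOWED="$BDF" ./scripts/setup.sh config | grep -E "$BDF .*: nvme -> .*"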
12:46:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
12:46:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
12:46:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:03:10.114 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0
00:03:10.114 12:46:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0
12:46:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
12:46:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
12:46:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]]
12:46:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver
12:46:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
12:46:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
12:46:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
12:46:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
12:46:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:14.306
00:03:14.306 real 0m6.507s
00:03:14.306 user 0m1.881s
00:03:14.306 sys 0m3.770s
00:03:14.306 12:46:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:14.306 12:46:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:14.306 ************************************
00:03:14.306 END TEST denied
00:03:14.306 ************************************
00:03:14.306 12:46:51 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:14.306 12:46:51 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:14.306 12:46:51 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:14.306 12:46:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:14.306 ************************************
00:03:14.306 START TEST allowed
00:03:14.306 ************************************
00:03:14.306 12:46:51 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:03:14.306 12:46:51 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*'
00:03:14.306 12:46:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0
00:03:14.306 12:46:51 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:14.306 12:46:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.306 12:46:51 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:03:22.424 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:03:22.424 12:46:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:22.424 12:46:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:22.424 12:46:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:22.424 12:46:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:22.424 12:46:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:24.958
00:03:24.958 real 0m10.844s
00:03:24.958 user 0m1.935s
00:03:24.958 sys 0m3.894s
00:03:24.958 12:47:02 setup.sh.acl.allowed -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.958 12:47:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:24.958 ************************************ 00:03:24.958 END TEST allowed 00:03:24.958 ************************************ 00:03:24.958 00:03:24.958 real 0m23.723s 00:03:24.958 user 0m6.030s 00:03:24.958 sys 0m11.976s 00:03:24.958 12:47:02 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.958 12:47:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.958 ************************************ 00:03:24.958 END TEST acl 00:03:24.958 ************************************ 00:03:24.958 12:47:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.958 12:47:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.958 12:47:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.958 12:47:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.958 ************************************ 00:03:24.958 START TEST hugepages 00:03:24.958 ************************************ 00:03:24.958 12:47:02 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.958 * Looking for test storage... 00:03:24.958 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.958 12:47:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 71016156 kB' 'MemAvailable: 75515264 kB' 'Buffers: 2696 kB' 'Cached: 14433044 kB' 'SwapCached: 0 kB' 'Active: 10512280 kB' 'Inactive: 4419180 kB' 'Active(anon): 9899612 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498996 kB' 'Mapped: 
191208 kB' 'Shmem: 9403892 kB' 'KReclaimable: 241804 kB' 'Slab: 676660 kB' 'SReclaimable: 241804 kB' 'SUnreclaim: 434856 kB' 'KernelStack: 16464 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438232 kB' 'Committed_AS: 11188116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205492 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:24.958
[xtrace condensed: setup/common.sh get_meminfo read each /proc/meminfo field in turn, from MemTotal through HugePages_Surp, matched it against \H\u\g\e\p\a\g\e\s\i\z\e, and skipped every non-matching field with continue / IFS=': ' / read -r var val _]
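Before the Hugepagesize line finally matches below (answering 2048), the two helpers at work here are worth seeing in compact form: get_meminfo is essentially a keyed scan of /proc/meminfo, and the clear_hp pass that follows zeroes the per-node, per-size hugepage counters. The xtrace hides the redirection on its "echo 0" lines, so the sketch spells out the assumed target; these are compact equivalents rather than the harness's own loops (root required for the writes):

    get_meminfo() {
        local key=$1
        # /proc/meminfo keys end with ':'; the value sits in column 2 (kB for most).
        awk -v k="$key:" '$1 == k { print $2; exit }' /proc/meminfo
    }

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # assumed redirection target, per the sysfs layout
            done
        done
    }

    get_meminfo Hugepagesize    # -> 2048 on this machine, matching the trace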
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.220 12:47:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:25.220 12:47:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:25.220 12:47:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:25.220 12:47:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.220 ************************************ 00:03:25.220 START TEST default_setup 00:03:25.220 ************************************ 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- 
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.220 12:47:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:28.509 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:28.509 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:33.793 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
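The ioatdma -> vfio-pci and nvme -> vfio-pci lines above are scripts/setup.sh handing the DMA engines and the test NVMe drive over to vfio-pci for userspace I/O. The generic kernel mechanism behind a rebind like that looks roughly as follows (an illustrative sketch, not setup.sh itself; the BDF is the NVMe device from the log, and the vfio-pci module must already be loaded):

    bdf=0000:5f:00.0                      # NVMe device rebound in the log above
    # Detach from the current kernel driver, if one is bound.
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    # Prefer vfio-pci for this device, then ask the PCI core to reprobe it.
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe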
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73151740 kB' 'MemAvailable: 77650672 kB' 'Buffers: 2696 kB' 'Cached: 14433172 kB' 'SwapCached: 0 kB' 'Active: 10531324 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918656 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517988 kB' 'Mapped: 191540 kB' 'Shmem: 9404020 kB' 'KReclaimable: 241452 kB' 'Slab: 675184 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433732 kB' 'KernelStack: 16784 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11208280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205780 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:33.793 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: every key before AnonHugePages fails the match and hits 'continue' ...]
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
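The get_meminfo call traced above is just a field lookup over /proc/meminfo: split each line on ': ', compare the key, echo the value on the first hit. A self-contained equivalent (hypothetical helper name; the per-node branch via /sys/devices/system/node/<node>/meminfo is omitted here):

    # Print the numeric value of one /proc/meminfo field.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"    # value only; any trailing 'kB' lands in the third field
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value AnonHugePages    # prints 0 on this machine, matching anon=0 above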
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- [... xtrace elided: get_meminfo locals and mapfile setup, identical to the AnonHugePages call above ...]
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73153444 kB' 'MemAvailable: 77652376 kB' 'Buffers: 2696 kB' 'Cached: 14433176 kB' 'SwapCached: 0 kB' 'Active: 10531744 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919076 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518376 kB' 'Mapped: 191540 kB' 'Shmem: 9404024 kB' 'KReclaimable: 241452 kB' 'Slab: 675120 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433668 kB' 'KernelStack: 16784 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11208296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205764 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:33.795 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: every key before HugePages_Surp fails the match and hits 'continue' ...]
00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
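Both snapshots above show the pool fully intact: HugePages_Total 1024, HugePages_Free 1024, and zero surplus pages, so surp=0 is the expected answer. To turn those fields into the amount of free hugepage memory, a one-liner along these lines works (a sketch, not part of the test):

    # free pages x page size; prints '2097152 kB' for the snapshots above
    awk '/^HugePages_Free:/ {free=$2} /^Hugepagesize:/ {sz=$2} END {print free * sz, "kB"}' /proc/meminfo

That 2097152 kB matches both the Hugetlb field in the snapshots and the size that get_test_nr_hugepages was asked for.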
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73153080 kB' 'MemAvailable: 77652012 kB' 'Buffers: 2696 kB' 'Cached: 14433176 kB' 'SwapCached: 0 kB' 'Active: 10530212 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917544 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516804 kB' 'Mapped: 191432 kB' 'Shmem: 9404024 kB' 'KReclaimable: 241452 kB' 'Slab: 675088 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433636 kB' 'KernelStack: 16624 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11206848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.798 12:47:11 setup.sh.hugepages.default_setup -- 
[trace condensed: the setup/common.sh@31/@32 read-and-compare loop walks every snapshot field above, from MemTotal through HugePages_Free, taking the continue branch on each one until it reaches the requested HugePages_Rsvd entry]
00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.801 nr_hugepages=1024 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.801 resv_hugepages=0 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.801 surplus_hugepages=0 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.801 anon_hugepages=0 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
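With resv known to be 0, the hugepages.sh@107-110 records above assert that the pool adds up: HugePages_Total (1024) must equal the configured nr_hugepages plus surplus plus reserved pages. A hedged sketch of that consistency check, reusing the get_meminfo_field sketch from above (variable names mirror the trace; the arithmetic is the same check the test performs):

# Sketch of the accounting assertion traced at setup/hugepages.sh@107-110.
nr_hugepages=1024                                # what default_setup configured
resv=$(get_meminfo_field HugePages_Rsvd)         # 0 in this run
surp=$(get_meminfo_field HugePages_Surp)         # 0 in this run
total=$(get_meminfo_field HugePages_Total)       # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total"
else
    echo "hugepage pool mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi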
00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.801 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73152648 kB' 'MemAvailable: 77651580 kB' 'Buffers: 2696 kB' 'Cached: 14433216 kB' 'SwapCached: 0 kB' 'Active: 10530148 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917480 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516732 kB' 'Mapped: 191424 kB' 'Shmem: 9404064 kB' 'KReclaimable: 241452 kB' 'Slab: 675088 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433636 kB' 'KernelStack: 16752 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11208340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205796 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[trace condensed: the same setup/common.sh@31/@32 loop walks this snapshot from MemTotal through Unaccepted, skipping each field with continue until it reaches HugePages_Total]
00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.803 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.804 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.804 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.804 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
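Two things change in the records above once get_meminfo is called with a node argument: the input file becomes /sys/devices/system/node/node0/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") expansion now does real work, stripping the "Node 0 " prefix that per-node meminfo lines carry. A sketch of node enumeration in the same extglob style as the traced get_nodes loop (the summary output format is illustrative only):

# Sketch: read HugePages_Total for every NUMA node, mirroring get_nodes
# plus the per-node branch of get_meminfo.
shopt -s extglob                                 # required for +([0-9]) patterns
declare -A node_pages
for node in /sys/devices/system/node/node+([0-9]); do
    mapfile -t mem < "$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")             # strip the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Total ]] && node_pages[${node##*node}]=$val
    done
done
declare -p node_pages                            # e.g. ([0]="1024" [1]="0") on this box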
00:03:33.804 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33505032 kB' 'MemUsed: 14611956 kB' 'SwapCached: 0 kB' 'Active: 7600568 kB' 'Inactive: 3542112 kB' 'Active(anon): 7408016 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010272 kB' 'Mapped: 127184 kB' 'AnonPages: 135576 kB' 'Shmem: 7275608 kB' 'KernelStack: 9896 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133840 kB' 'Slab: 413608 kB' 'SReclaimable: 133840 kB' 'SUnreclaim: 279768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: the same setup/common.sh@31/@32 loop walks the node0 fields from MemTotal through HugePages_Free, skipping each with continue until it reaches HugePages_Surp]
00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.805 node0=1024 expecting 1024 12:47:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.805 00:03:33.805 real 0m8.358s 00:03:33.805 user 0m1.326s 00:03:33.805 sys 0m2.226s 12:47:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:33.805 12:47:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:33.805 ************************************ 00:03:33.805 END TEST default_setup 00:03:33.805 ************************************ 00:03:33.805 12:47:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:33.805 12:47:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:33.805 12:47:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:33.805 12:47:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.806 ************************************ 00:03:33.806 START TEST per_node_1G_alloc 00:03:33.806 ************************************
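The per_node_1G_alloc trace that follows sizes the pool per node instead of globally: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) on node IDs 0 and 1, and at the 2048 kB Hugepagesize reported in the snapshots above that works out to 1048576 / 2048 = 512 pages per node, handed to scripts/setup.sh as NRHUGE=512 HUGENODE=0,1. A small sketch of that sizing arithmetic (a reading of the traced assignments, not a copy of the helper's source):

# Sketch: per-node hugepage sizing as suggested by the trace below.
size_kb=1048576                                  # requested pool per node: 1 GiB
hugepagesize_kb=2048                             # Hugepagesize from the snapshots
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 512 pages per node
node_ids=(0 1)                                   # the HUGENODE targets
declare -a nodes_test
for id in "${node_ids[@]}"; do
    nodes_test[id]=$nr_hugepages                 # 512 pages requested on each node
done
NRHUGE=$nr_hugepages
HUGENODE=$(IFS=,; echo "${node_ids[*]}")         # yields "0,1"
echo "NRHUGE=$NRHUGE HUGENODE=$HUGENODE"         # what setup.sh consumes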
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:33.806 12:47:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:36.409 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:36.409 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:36.409 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.409 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73115216 kB' 'MemAvailable: 77614148 kB' 'Buffers: 2696 kB' 'Cached: 14433308 kB' 'SwapCached: 0 kB' 'Active: 10531180 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918512 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517064 kB' 'Mapped: 191404 kB' 'Shmem: 9404156 kB' 'KReclaimable: 241452 kB' 'Slab: 675404 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433952 kB' 'KernelStack: 16608 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11206136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205764 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[trace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / continue scan over each field above until AnonHugePages matches]
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.411 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73116524 kB' 'MemAvailable: 77615456 kB' 'Buffers: 2696 kB' 'Cached: 14433312 kB' 'SwapCached: 0 kB' 'Active: 10530868 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918200 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517320 kB' 'Mapped: 191396 kB' 'Shmem: 9404160 kB' 'KReclaimable: 241452 kB' 'Slab: 675416 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433964 kB' 'KernelStack: 16592 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11206152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
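Every get_meminfo call in this test walks the same path through setup/common.sh (lines 17 through 33 in the trace): pick /proc/meminfo, or the per-node file when a node id is supplied, strip any "Node N " prefix, then scan line by line until the requested field matches. A sketch reconstructed from that trace; treat it as an illustration of the mechanism, not the verbatim setup/common.sh:

  # Illustrative reconstruction of get_meminfo, based on the xtrace above.
  shopt -s extglob                      # the +([0-9]) pattern below needs extglob
  get_meminfo() {
      local get=$1 node=$2 var val
      local mem_f mem line
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long continue runs in this log are this check
          echo "$val"
          return 0
      done
      return 1
  }

So get_meminfo AnonHugePages prints 0 here (the echo 0 / return 0 pair in the trace above), and a call such as get_meminfo HugePages_Total 0 would read node0's file instead of /proc/meminfo.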
[trace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / continue scan over each field above until HugePages_Surp matches]
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.678 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73116524 kB' 'MemAvailable: 77615456 kB' 'Buffers: 2696 kB' 'Cached: 14433332 kB' 'SwapCached: 0 kB' 'Active: 10530896 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918228 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517324 kB' 'Mapped: 191396 kB' 'Shmem: 9404180 kB' 'KReclaimable: 241452 kB' 'Slab: 675416 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433964 kB' 'KernelStack: 16592 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11206176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
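With anon, surp and resv collected (hugepages.sh@97, @99 and @100 in the trace), verify_nr_hugepages has what it needs to judge the pool. A condensed sketch of that bookkeeping, inferred from the line numbers here and from the default_setup epilogue earlier; the exact formula in setup/hugepages.sh is an assumption:

  # Inferred bookkeeping, illustrative only (uses the get_meminfo sketch above).
  anon=$(get_meminfo AnonHugePages)   # 0 -> no transparent hugepages skewing the count
  surp=$(get_meminfo HugePages_Surp)  # 0 -> nothing allocated beyond the configured pool
  resv=$(get_meminfo HugePages_Rsvd)  # 0 -> nothing reserved but not yet faulted in
  # Discount surplus/reserved pages so only the deliberately configured pool
  # is compared against the request (assumed arithmetic):
  nr=$(( $(get_meminfo HugePages_Total) - surp - resv ))   # 1024 - 0 - 0 = 1024
  # Per-node sums are then echoed for comparison, as in the earlier
  # "node0=1024 expecting 1024" line of default_setup.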
[trace condensed: setup/common.sh@31-32 repeat the same IFS=': ' / read -r var val _ / continue scan over the fields above for the HugePages_Rsvd lookup]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.679 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.679 12:47:14 
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:36.680 nr_hugepages=1024
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:36.680 resv_hugepages=0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:36.680 surplus_hugepages=0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:36.680 anon_hugepages=0
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.680 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73116956 kB' 'MemAvailable: 77615888 kB' 'Buffers: 2696 kB' 'Cached: 14433372 kB' 'SwapCached: 0 kB' 'Active: 10530532 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917864 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516928 kB' 'Mapped: 191396 kB' 'Shmem: 9404220 kB' 'KReclaimable: 241452 kB' 'Slab: 675416 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433964 kB' 'KernelStack: 16576 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11206196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205684 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:36.681 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace trimmed: field-by-field scan repeats (MemTotal … Unaccepted); no field matches HugePages_Total until the final compare]
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
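The get_meminfo helper whose xtrace dominates this log is, at its core, a small meminfo parser. As a readability aid, here is a minimal bash sketch reconstructed from the trace entries above — not the verbatim setup/common.sh source, so exact quoting, line numbers, and error handling may differ:

    shopt -s extglob   # required for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeNODE/meminfo when a NUMA node is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every row with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 on this host, per the dump above
    get_meminfo HugePages_Surp 0    # prints 0 for NUMA node 0

The trace's per-field "continue" entries are exactly this loop skipping every field that does not match the requested key.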
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34528196 kB' 'MemUsed: 13588792 kB' 'SwapCached: 0 kB' 'Active: 7601356 kB' 'Inactive: 3542112 kB' 'Active(anon): 7408804 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010292 kB' 'Mapped: 127132 kB' 'AnonPages: 136360 kB' 'Shmem: 7275628 kB' 'KernelStack: 9912 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133840 kB' 'Slab: 413844 kB' 'SReclaimable: 133840 kB' 'SUnreclaim: 280004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:36.682 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace trimmed: field scan of node0 meminfo repeats (MemTotal … HugePages_Free); no field matches HugePages_Surp until the final compare]
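Behind these hugepages.sh entries sits the per-node bookkeeping: get_nodes records that each of the two NUMA nodes was asked for 512 huge pages, and the @115-@117 loop then folds the kernel's reserved and surplus counters back in so the per-node totals can be compared against nr_hugepages. A hedged sketch of that logic — nodes_sys, nodes_test, resv, and no_nodes are the names seen in the trace, the 512-per-node split comes from the log, get_meminfo is the helper sketched earlier, and the surrounding test harness is omitted:

    shopt -s extglob
    declare -a nodes_sys nodes_test

    # get_nodes: expected huge-page count per NUMA node (512 each here).
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}       # 2 on this machine
    (( no_nodes > 0 )) || exit 1

    # Fold reserved/surplus pages into the expected per-node counts
    # (both are 0 in the log above).
    resv=0
    nodes_test=("${nodes_sys[@]}")
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    # 512 + 512 with no surplus matches nr_hugepages=1024 checked earlier.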
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38588620 kB' 'MemUsed: 5587952 kB' 'SwapCached: 0 kB' 'Active: 2929568 kB' 'Inactive: 877068 kB' 'Active(anon): 2509452 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3425800 kB' 'Mapped: 65272 kB' 'AnonPages: 380968 kB' 'Shmem: 2128616 kB' 'KernelStack: 6696 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107612 kB' 'Slab: 261572 kB' 'SReclaimable: 107612 kB' 'SUnreclaim: 153960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:36.683 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace trimmed: field scan of node1 meminfo repeats (MemTotal … Inactive(anon)); no match for HugePages_Surp yet, scan continues]
00:03:36.684 12:47:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.684 node0=512 expecting 512 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.684 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.685 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.685 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:36.685 node1=512 expecting 512 00:03:36.685 12:47:14 
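[editor's note] The get_meminfo calls traced above reduce to a small helper: pick /proc/meminfo or the per-NUMA-node meminfo file, strip the "Node <n> " prefix that the per-node files carry, and scan line by line for one field. A minimal self-contained sketch of that behavior, reconstructed from the trace (the function name is illustrative, not the script's own):

    #!/usr/bin/env bash
    shopt -s extglob                                 # for the +([0-9]) pattern below
    # Sketch of the per-node meminfo lookup the trace keeps repeating.
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # per-node files live in sysfs and prefix every line with "Node <n> "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # strip the per-node prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch HugePages_Surp 1              # printed 0 in the run above

The linear scan with "continue" on every non-matching field is exactly what produces the long runs of identical trace lines collapsed above.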
00:03:36.685 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:36.685 real    0m3.111s
00:03:36.685 user    0m1.145s
00:03:36.685 sys     0m1.950s
00:03:36.685 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:36.685 12:47:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:36.685 ************************************
00:03:36.685 END TEST per_node_1G_alloc
00:03:36.685 ************************************
00:03:36.685 12:47:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:36.685 12:47:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:36.685 12:47:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:36.685 12:47:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.685 ************************************
00:03:36.685 START TEST even_2G_alloc
00:03:36.685 ************************************
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
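[editor's note] The arithmetic the trace just walked through: a 2,097,152 kB (2 GiB) request at the default 2,048 kB hugepage size is 1,024 pages, and get_test_nr_hugepages_per_node counts _no_nodes=2 down, giving each node 1024 / 2 = 512 pages, which is where the "node0=512/node1=512 expecting 512" values come from. A standalone sketch of that split (variable names mirror the trace; the function itself is illustrative):

    #!/usr/bin/env bash
    # Even split of a hugepage budget across NUMA nodes, as traced above.
    even_split_sketch() {
        local size_kb=$1 no_nodes=$2
        local hugepagesize_kb=2048               # Hugepagesize per /proc/meminfo
        local nr_hugepages=$((size_kb / hugepagesize_kb))
        local -a nodes_test
        local node
        for ((node = 0; node < no_nodes; node++)); do
            nodes_test[node]=$((nr_hugepages / no_nodes))
        done
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]}"
        done
    }
    even_split_sketch 2097152 2                  # -> node0=512, node1=512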
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.685 12:47:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:39.980 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:39.980 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.980 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.980 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... get_meminfo setup elided: get=AnonHugePages, node= (empty, so mem_f stays /proc/meminfo and the per-node branch is skipped), mapfile -t mem, prefix strip is a no-op here ...]
00:03:39.981 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73122840 kB' 'MemAvailable: 77621772 kB' 'Buffers: 2696 kB' 'Cached: 14433460 kB' 'SwapCached: 0 kB' 'Active: 10529916 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917248 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516220 kB' 'Mapped: 190700 kB' 'Shmem: 9404308 kB' 'KReclaimable: 241452 kB' 'Slab: 674764 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433312 kB' 'KernelStack: 16544 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11200932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205556 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[... repetitive xtrace elided: the read loop walks every /proc/meminfo field from MemTotal through HardwareCorrupted, executing "continue" on each, until AnonHugePages matches ...]
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
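[editor's note] The "always [madvise] never" string tested at hugepages.sh@96 looks like the kernel's transparent-hugepage policy string, and anon=0 is the AnonHugePages figure just read. A hedged sketch of that gate; the sysfs path and function name below are assumptions for illustration, not taken from this log:

    #!/usr/bin/env bash
    # Only count anonymous hugepages when THP is not pinned to [never];
    # with the "[never]" policy active, AnonHugePages cannot grow anyway.
    thp_anon_sketch() {
        local thp=/sys/kernel/mm/transparent_hugepage/enabled
        local policy="" anon=0
        [[ -r $thp ]] && policy=$(<"$thp")       # e.g. "always [madvise] never"
        if [[ $policy != *"[never]"* ]]; then
            anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        fi
        echo "$anon"                             # kB; 0 on this box per the dump
    }
    thp_anon_sketch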
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... get_meminfo setup elided: get=HugePages_Surp, node= (empty, so mem_f stays /proc/meminfo), mapfile -t mem ...]
00:03:39.982 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73123224 kB' 'MemAvailable: 77622156 kB' 'Buffers: 2696 kB' 'Cached: 14433464 kB' 'SwapCached: 0 kB' 'Active: 10529752 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917084 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516060 kB' 'Mapped: 190664 kB' 'Shmem: 9404312 kB' 'KReclaimable: 241452 kB' 'Slab: 674756 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433304 kB' 'KernelStack: 16608 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11200712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[... repetitive xtrace elided: the read loop again skips every field from MemTotal through HugePages_Rsvd with "continue" until HugePages_Surp matches ...]
00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73124328 kB' 'MemAvailable: 77623260 kB' 'Buffers: 2696 kB' 'Cached: 14433484 kB' 'SwapCached: 0 kB' 'Active: 10529804 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917136 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516108 kB' 'Mapped: 190684 kB' 'Shmem: 9404332 kB' 'KReclaimable: 241452 kB' 'Slab: 674756 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433304 kB' 'KernelStack: 16672 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11202212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.984 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.985 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.986 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.249 nr_hugepages=1024 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.249 resv_hugepages=0 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.249 surplus_hugepages=0 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.249 anon_hugepages=0 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:40.249 
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.249 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73124376 kB' 'MemAvailable: 77623308 kB' 'Buffers: 2696 kB' 'Cached: 14433520 kB' 'SwapCached: 0 kB' 'Active: 10529820 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917152 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516024 kB' 'Mapped: 190684 kB' 'Shmem: 9404368 kB' 'KReclaimable: 241452 kB' 'Slab: 674756 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433304 kB' 'KernelStack: 16688 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11202236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205764 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # parse loop: each key from MemTotal through Unaccepted fails the match against HugePages_Total and hits continue
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
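The assertions at setup/hugepages.sh@107-@110 above all enforce one invariant: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A minimal sketch of that check, assuming the get_meminfo sketch above; verify_hugepages is an illustrative name, not part of the SPDK scripts.

    # Sketch of the accounting invariant checked by setup/hugepages.sh above.
    verify_hugepages() {
        local nr_hugepages=$1
        local surp resv total

        surp=$(get_meminfo HugePages_Surp)   # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
        total=$(get_meminfo HugePages_Total) # 1024 in this run

        # Total must account for requested, surplus and reserved pages.
        (( total == nr_hugepages + surp + resv ))
    }

    verify_hugepages 1024 && echo "hugepage accounting consistent"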
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:40.250 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34538312 kB' 'MemUsed: 13578676 kB' 'SwapCached: 0 kB' 'Active: 7601856 kB' 'Inactive: 3542112 kB' 'Active(anon): 7409304 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010316 kB' 'Mapped: 126412 kB' 'AnonPages: 136748 kB' 'Shmem: 7275652 kB' 'KernelStack: 9992 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133840 kB' 'Slab: 413360 kB' 'SReclaimable: 133840 kB' 'SUnreclaim: 279520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
# read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.251 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
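The setup/common.sh@17-33 lines above are one full expansion of the get_meminfo helper: pick /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo file when a node index is given, strip the "Node N " prefix the per-node files carry, then scan "field: value" pairs until the requested field matches. A minimal bash reconstruction of that traced behavior (a sketch inferred from the xtrace, not the verbatim setup/common.sh):

    shopt -s extglob   # the "+([0-9])" strip below is an extended glob

    # get_meminfo FIELD [NODE] -- print FIELD's value, system-wide or per-node.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the compare the xtrace repeats per field
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # surplus huge pages on NUMA node 1 -> "0" above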
00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:40.252 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38585964 kB' 'MemUsed: 5590608 kB' 'SwapCached: 0 kB' 'Active: 2928092 kB' 'Inactive: 877068 kB' 'Active(anon): 2507976 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3425904 kB' 'Mapped: 64272 kB' 'AnonPages: 379420 kB' 'Shmem: 2128720 kB' 'KernelStack: 6664 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107612 kB' 'Slab: 261396 kB' 'SReclaimable: 107612 kB' 'SUnreclaim: 153784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:40.252 [xtrace elided: the same setup/common.sh@31-32 field-by-field scan repeats for node1, MemTotal through HugePages_Free]
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:40.253 node0=512 expecting 512
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:40.253 node1=512 expecting 512
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:40.253
00:03:40.253 real 0m3.409s
00:03:40.253 user 0m1.295s
00:03:40.253 sys 0m2.186s
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
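The hugepages.sh@126-130 bookkeeping just traced collects the distinct per-node counts by using them as sparse-array indices, prints "nodeN=... expecting ...", and compares, which is why the final check collapses to [[ 512 == 512 ]]. A short bash approximation of that pattern (nodes_sys holding the counts read back from sysfs is an assumption of this sketch):

    declare -a nodes_test=(512 512)   # expected pages per node (even 2G alloc)
    declare -a nodes_sys=(512 512)    # assumed: values read back via get_meminfo
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1  # index-as-key: a set of distinct counts
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Both sets reduce to the single key "512", matching the traced comparison.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage spread verified"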
00:03:40.253 12:47:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:40.253 ************************************
00:03:40.253 END TEST even_2G_alloc
00:03:40.253 ************************************
00:03:40.253 12:47:17 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:40.254 12:47:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:40.254 12:47:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:40.254 12:47:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:40.254 ************************************
00:03:40.254 START TEST odd_alloc
00:03:40.254 ************************************
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
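For odd_alloc the request is 2098176 kB (HUGEMEM=2049); at the 2048 kB huge page size this comes out as nr_hugepages=1025 in the trace, and the hugepages.sh@81-84 loop then walks the nodes from the last one down, giving each floor(remaining / nodes_left) pages, so node1 gets 512 and node0 the remaining 513 - exactly the ": 513"/": 1" and ": 0"/": 0" arithmetic traced above. A bash sketch of that split as it appears in the trace:

    _nr_hugepages=1025   # traced at setup/hugepages.sh@57
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ": 513", ": 0"
        : $(( _no_nodes -= 1 ))                               # traced as ": 1", ": 0"
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # node0=513 node1=512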
00:03:40.254 12:47:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:43.551 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:43.551 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.551 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.551 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140096 kB' 'MemAvailable: 77639028 kB' 'Buffers: 2696 kB' 'Cached: 14433604 kB' 'SwapCached: 0 kB' 'Active: 10530320 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917652 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516500 kB' 'Mapped: 190668 kB' 'Shmem: 9404452 kB' 'KReclaimable: 241452 kB' 'Slab: 674120 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 432668 kB' 'KernelStack: 16528 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11200240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:43.551 [xtrace elided: setup/common.sh@31-32 repeats the field-by-field scan of this system-wide dump, MemTotal through HardwareCorrupted, while searching for AnonHugePages]
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
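The hugepages.sh@94-97 lines above gate the anonymous-page count on transparent hugepages: the traced test compares "always [madvise] never" against *\[\n\e\v\e\r\]*, and since "never" is not the selected mode, AnonHugePages is read back (0 kB in this run), leaving anon=0. A bash approximation of that gate, reusing the get_meminfo sketch from earlier; the traced mode string matches the format of /sys/kernel/mm/transparent_hugepage/enabled, which this sketch assumes is the source:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anon memory; 0 here
    fi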
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140132 kB' 'MemAvailable: 77639064 kB' 'Buffers: 2696 kB' 'Cached: 14433608 kB' 'SwapCached: 0 kB' 'Active: 10530080 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917412 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516292 kB' 'Mapped: 190660 kB' 'Shmem: 9404456 kB' 'KReclaimable: 241452 kB' 'Slab: 674116 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 432664 kB' 'KernelStack: 16544 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11200256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:43.552 [xtrace elided: setup/common.sh@31-32 repeats the field-by-field scan of this dump while searching for HugePages_Surp]
00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.552 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140800 kB' 'MemAvailable: 77639732 kB' 'Buffers: 2696 kB' 'Cached: 14433624 kB' 'SwapCached: 0 kB' 'Active: 10530104 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917436 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516292 kB' 'Mapped: 190660 kB' 'Shmem: 9404472 kB' 'KReclaimable: 241452 kB' 'Slab: 674116 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 432664 kB' 'KernelStack: 16544 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11200276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 
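For readers following the trace: get_meminfo in setup/common.sh is a plain first-match key/value scan. It snapshots a meminfo file into an array (per-node files get their "Node N " prefix stripped at common.sh@29), then walks the entries with IFS=': ' until the requested key matches and echoes its value. A minimal stand-alone sketch of the same pattern follows; "meminfo_get" is a hypothetical name used for illustration, not SPDK's actual helper, and the details are inferred only from the commands visible in this trace.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above (illustrative, not SPDK code).
    shopt -s extglob   # needed for the +([0-9]) prefix-strip below

    meminfo_get() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        # With a node argument, read that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }     # per-node lines start with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then   # first matching key wins
                echo "${val:-0}"            # value only; the "kB" unit lands in _
                return 0
            fi
        done <"$mem_f"
        echo 0                              # absent key reported as 0
    }

    # Against the snapshot above: meminfo_get HugePages_Total   -> 1025
    #                             meminfo_get HugePages_Surp 0  -> node 0's surplus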
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.553 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140800 kB' 'MemAvailable: 77639732 kB' 'Buffers: 2696 kB' 'Cached: 14433624 kB' 'SwapCached: 0 kB' 'Active: 10530104 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917436 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516292 kB' 'Mapped: 190660 kB' 'Shmem: 9404472 kB' 'KReclaimable: 241452 kB' 'Slab: 674116 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 432664 kB' 'KernelStack: 16544 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11200276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[... trace elided: the same per-key scan repeats against \H\u\g\e\P\a\g\e\s\_\R\s\v\d for each key from MemTotal through HugePages_Free until the requested key matches ...]
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:43.555 nr_hugepages=1025
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:43.555 resv_hugepages=0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:43.555 surplus_hugepages=0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:43.555 anon_hugepages=0
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
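The four summary lines just printed feed the sanity check at setup/hugepages.sh@107: the 1025 pages the odd_alloc case requested must be fully accounted for by the kernel, i.e. equal the reported page count plus the surplus and reserved adjustments gathered above. A worked restatement of that check, reusing the hypothetical meminfo_get sketch from earlier (variable names are illustrative, not the script's own):

    requested=1025                        # what the odd_alloc case asked for
    surp=$(meminfo_get HugePages_Surp)    # 0 in this run
    resv=$(meminfo_get HugePages_Rsvd)    # 0 in this run
    nr=$(meminfo_get HugePages_Total)     # 1025 in this run
    if (( requested == nr + surp + resv )); then
        echo "hugepage accounting consistent: $nr pages"
    else
        echo "mismatch: kernel accounts for $((nr + surp + resv)), wanted $requested" >&2
    fi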
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140296 kB' 'MemAvailable: 77639228 kB' 'Buffers: 2696 kB' 'Cached: 14433660 kB' 'SwapCached: 0 kB' 'Active: 10530112 kB' 'Inactive: 4419180 kB' 'Active(anon): 9917444 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516244 kB' 'Mapped: 190660 kB' 'Shmem: 9404508 kB' 'KReclaimable: 241452 kB' 'Slab: 674116 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 432664 kB' 'KernelStack: 16544 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485784 kB' 'Committed_AS: 11200296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.555 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
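The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" steps above is setup/common.sh's get_meminfo() scanning one "key: value" line of meminfo at a time until the requested key matches, then echoing its value (the "echo 1025 / return 0" just below). A minimal sketch of that helper, reconstructed from the @17-@33 steps in this trace rather than copied verbatim from SPDK:

    shopt -s extglob                      # the +([0-9]) pattern below needs extglob

    get_meminfo() {                       # reconstructed from the @17-@33 trace steps
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the step logged once per non-matching key
            echo "$val"                        # e.g. 1025 for HugePages_Total just below
            return 0
        done
        return 1
    }

It is invoked as get_meminfo HugePages_Total for the machine-wide count, or as get_meminfo HugePages_Surp <node> against /sys/devices/system/node/nodeN/meminfo, which is why the same key-by-key scan repeats once per NUMA node throughout this log.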
00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34553628 kB' 'MemUsed: 13563360 kB' 'SwapCached: 0 kB' 'Active: 7600884 kB' 'Inactive: 3542112 kB' 
'Active(anon): 7408332 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010324 kB' 'Mapped: 126376 kB' 'AnonPages: 135752 kB' 'Shmem: 7275660 kB' 'KernelStack: 9848 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133840 kB' 'Slab: 412924 kB' 'SReclaimable: 133840 kB' 'SUnreclaim: 279084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.556 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 38584672 kB' 'MemUsed: 5591900 kB' 'SwapCached: 0 kB' 'Active: 2929524 kB' 'Inactive: 877068 kB' 'Active(anon): 2509408 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3426036 kB' 'Mapped: 64284 kB' 'AnonPages: 380732 kB' 'Shmem: 2128852 kB' 'KernelStack: 6712 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107612 kB' 'Slab: 261192 kB' 'SReclaimable: 107612 kB' 'SUnreclaim: 153580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 
12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.557 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
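Stepping back from the key-by-key noise: the hugepages.sh@110-@117 steps threaded through this trace first check that the kernel really allocated the requested total, then fold reserved and per-node surplus pages into each node's expected count before the final comparison. A rough sketch of that accounting, using this run's logged values (1025 pages split 512/513, zero surplus and reserve) and the get_meminfo sketch above:

    # Rough sketch of the hugepages.sh@110-@117 accounting as logged; the
    # literal numbers are just this run's values.
    nr_hugepages=1025 surp=0 resv=0
    nodes_test=([0]=512 [1]=513)      # expected split of 1025 pages across 2 nodes

    # @110: the kernel must have handed out exactly what was requested
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # @115-@117: fold reserved plus per-node surplus into each expected count
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 on both nodes here
    done

With surplus and reserve both zero, each per-node add is the "(( nodes_test[node] += 0 ))" step visible in the trace.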
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
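The @126-@130 steps just below close odd_alloc out with a compact bash idiom worth spelling out: each per-node count is stored as an array index, so expanding the indices with ${!array[*]} yields the counts in ascending numeric order, and a single string comparison (the "[[ 512 513 == \5\1\2\ \5\1\3 ]]" in the log) accepts the allocation no matter which node ended up with the odd page. A standalone sketch with illustrative values:

    # Sketch of the sorted_t/sorted_s idiom from hugepages.sh@126-@130: counts
    # become array *indices*, and index expansion is numerically ordered.
    nodes_test=([0]=513 [1]=512)   # expected counts (illustrative order)
    nodes_sys=([0]=512 [1]=513)    # counts read back per node (illustrative)

    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done

    # Both sides expand to "512 513", so the odd page may sit on either node.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK

That is why the log can print "node0=512 expecting 513" and "node1=513 expecting 512" yet still pass: the comparison is order-insensitive by construction.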
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:43.558 node0=512 expecting 513 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:43.558 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:43.558 node1=513 expecting 512 12:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:43.558
00:03:43.558 real 0m3.299s
00:03:43.558 user 0m1.231s
00:03:43.559 sys 0m2.143s 12:47:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:43.559 12:47:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:43.559 ************************************
00:03:43.559 END TEST odd_alloc
00:03:43.559 ************************************
00:03:43.559 12:47:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:43.559 12:47:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:43.559 12:47:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:43.559 12:47:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:43.818 ************************************
00:03:43.818 START TEST custom_alloc
00:03:43.818 ************************************
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- #
(( size >= default_hugepages )) 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.818 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.819 12:47:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:47.116 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.116 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:00:04.3 (8086 2021): 
Already using the vfio-pci driver 00:03:47.116 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.116 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.116 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72079728 kB' 'MemAvailable: 76578660 kB' 'Buffers: 2696 kB' 'Cached: 14433760 kB' 'SwapCached: 0 kB' 'Active: 10531736 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919068 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517848 kB' 'Mapped: 190808 kB' 
'Shmem: 9404608 kB' 'KReclaimable: 241452 kB' 'Slab: 674704 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433252 kB' 'KernelStack: 16544 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11200656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.117 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.117 12:47:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [... xtrace elided: Inactive through WritebackTmp each tested against AnonHugePages and skipped with continue ...] 00:03:47.118
12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.118 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72080148 kB' 'MemAvailable: 76579080 kB' 'Buffers: 2696 kB' 'Cached: 14433764 kB' 'SwapCached: 0 kB' 'Active: 10531300 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918632 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517380 kB' 'Mapped: 190672 kB' 'Shmem: 9404612 kB' 'KReclaimable: 241452 kB' 'Slab: 674712 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433260 kB' 'KernelStack: 16528 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11200672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.119 12:47:24 
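The records above are setup/common.sh's get_meminfo helper at work: it slurps /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), strips any leading 'Node N ' prefix, then scans field by field until the requested key matches, echoing the value and returning. Every non-matching field shows up in the trace as a [[ ... ]] / continue pair, and the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are only how set -x renders a quoted comparison operand, not literal backslashes in the script. A minimal reconstruction from the trace alone (the authoritative helper lives in setup/common.sh; the @-tags in the comments refer to the records above):

shopt -s extglob                       # the +([0-9]) pattern below needs extglob

get_meminfo() {                        # usage: get_meminfo <field> [<numa node>]
    local get=$1
    local node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo                # common.sh@22
    # common.sh@23/@25: per-node lookups read that node's own meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"          # common.sh@28: one array entry per line
    mem=("${mem[@]#Node +([0-9]) }")   # common.sh@29: drop per-node prefixes
    # common.sh@31-@33: scan for the requested field; non-matches continue
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0        # e.g. "0" for HugePages_Surp here
    done < <(printf '%s\n' "${mem[@]}")   # common.sh@16
    return 1
}

Called as in the trace, anon=$(get_meminfo AnonHugePages) apparently produces the hugepages.sh@97 record (anon=0) seen a few records back.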
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.119 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: Cached through CmaTotal each tested against HugePages_Surp and skipped with continue ...] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.120 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 
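The mem=("${mem[@]#Node +([0-9]) }") transform repeated in each call header (common.sh@29) is what lets the same scan work on both /proc/meminfo and the per-node files under /sys/devices/system/node/, whose lines carry a 'Node N ' prefix. A self-contained illustration, with sample entries made up from values in the snapshots above:

shopt -s extglob                            # +([0-9]) is an extglob pattern
mem=('Node 0 MemTotal: 92293560 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")            # same transform as common.sh@29
printf '%s\n' "${mem[@]}"                   # -> MemTotal: 92293560 kB
                                            #    HugePages_Surp: 0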
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72079392 kB' 'MemAvailable: 76578324 kB' 'Buffers: 2696 kB' 'Cached: 14433764 kB' 'SwapCached: 0 kB' 'Active: 10530944 kB' 'Inactive: 4419180 kB' 'Active(anon): 9918276 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517024 kB' 'Mapped: 190672 kB' 'Shmem: 9404612 kB' 'KReclaimable: 241452 kB' 'Slab: 674712 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433260 kB' 'KernelStack: 16512 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11200692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.121 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.121 12:47:24 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... xtrace elided: Active through HugePages_Total each tested against HugePages_Rsvd and skipped with continue ...] 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:47.123 nr_hugepages=1536 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.123 resv_hugepages=0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.123 surplus_hugepages=0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.123 anon_hugepages=0 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.123 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 72080884 kB' 'MemAvailable: 76579816 kB' 'Buffers: 2696 kB' 'Cached: 14433804 kB' 'SwapCached: 0 kB' 'Active: 10533392 kB' 'Inactive: 4419180 kB' 'Active(anon): 9920724 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519588 kB' 'Mapped: 191176 kB' 'Shmem: 9404652 kB' 'KReclaimable: 241452 kB' 'Slab: 674712 kB' 'SReclaimable: 241452 kB' 'SUnreclaim: 433260 kB' 'KernelStack: 16640 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962520 kB' 'Committed_AS: 11216284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205668 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
00:03:47.123-00:03:47.125 [... xtrace elided: setup/common.sh@31-32 ran IFS=': ' read -r var val _ over each field of the dump above, hitting continue until the requested key matched ...]
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
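[Editor's note: the field-by-field [[ ... ]] / continue walk elided above is setup/common.sh's get_meminfo helper scanning a meminfo dump for a single key. A minimal bash sketch of that pattern, reconstructed from this xtrace rather than copied from the SPDK source -- names follow the trace, the loop-over-herestring form is an adaptation:]

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " prefix strip below

# Sketch of the get_meminfo pattern visible in the xtrace above.
# $1 = field to look up (e.g. HugePages_Total), $2 = optional NUMA node.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # The trace shows one [[ ... ]]/continue pair per field until this hits.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # -> 1536 in the run above
get_meminfo HugePages_Surp 0     # -> node0 surplus, 0 here

[End of note; the trace continues below.]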
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.125 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 34549128 kB' 'MemUsed: 13567860 kB' 'SwapCached: 0 kB' 'Active: 7602744 kB' 'Inactive: 3542112 kB' 'Active(anon): 7410192 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010400 kB' 'Mapped: 126368 kB' 'AnonPages: 137696 kB' 'Shmem: 7275736 kB' 'KernelStack: 9960 kB' 'PageTables: 3580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133840 kB' 'Slab: 413200 kB' 'SReclaimable: 133840 kB' 'SUnreclaim: 279360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:47.125-00:03:47.126 [... xtrace elided: setup/common.sh@31-32 per-field scan of the node0 dump above until HugePages_Surp matched ...]
00:03:47.126 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.126 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.126 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.127 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176572 kB' 'MemFree: 37530948 kB' 'MemUsed: 6645624 kB' 'SwapCached: 0 kB' 'Active: 2929192 kB' 'Inactive: 877068 kB' 'Active(anon): 2509076 kB' 'Inactive(anon): 0 kB' 'Active(file): 420116 kB' 'Inactive(file): 877068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3426116 kB' 'Mapped: 64616 kB' 'AnonPages: 380304 kB' 'Shmem: 2128932 kB' 'KernelStack: 6616 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107612 kB' 'Slab: 261496 kB' 'SReclaimable: 107612 kB' 'SUnreclaim: 153884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
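[Editor's note: around these two per-node dumps, setup/hugepages.sh@115-117 folds reserved pages into each node's expected count and then adds that node's surplus. A simplified reconstruction under the values seen in this run -- nodes_test, resv, and the surplus results come from the trace, the loop body is a sketch (not the verbatim SPDK helper), and get_meminfo is the function sketched in the earlier note:]

# Per-node bookkeeping for the custom 512/1024 split requested by this test.
nodes_test=([0]=512 [1]=1024)   # expectations set up earlier in the test
resv=0                          # HugePages_Rsvd, read just before this loop
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes here
    (( nodes_test[node] += surp ))
done
# Later (hugepages.sh@126-128) each node's live HugePages_Total is printed
# against the expectation: "node0=512 expecting 512", "node1=1024 expecting 1024".

[End of note; the node1 scan follows.]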
00:03:47.127-00:03:47.128 [... xtrace elided: the same per-field scan over the node1 dump above until HugePages_Surp matched ...]
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:47.128 node0=512 expecting 512
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:47.128 node1=1024 expecting 1024
00:03:47.128 12:47:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:47.128
00:03:47.128 real 0m3.321s
00:03:47.128 user 0m1.286s
00:03:47.128 sys 0m2.116s
00:03:47.129 12:47:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:47.129 12:47:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:47.129 ************************************
00:03:47.129 END TEST custom_alloc
00:03:47.129 ************************************
00:03:47.129 12:47:24 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:47.129 12:47:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:47.129 12:47:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:47.129 12:47:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:47.129 ************************************
00:03:47.129 START TEST no_shrink_alloc
00:03:47.129 ************************************
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
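[Editor's note: the no_shrink_alloc test starting above requests 2097152 kB of hugepages confined to node 0. A sketch of the sizing arithmetic implied by the trace; the division itself is inferred from the numbers (the xtrace only shows its result, nr_hugepages=1024), the kB units are inferred from 'Hugepagesize: 2048 kB' in the dumps, and get_meminfo is the helper sketched in the first note:]

# What get_test_nr_hugepages 2097152 0 works out to in this run:
# 2097152 kB / 2048 kB per page = 1024 pages, all pinned to node 0.
nodes_test=()
default_hugepages=$(get_meminfo Hugepagesize)    # 2048 (kB) on this machine
size=2097152                                     # kB, first argument
node_ids=(0)                                     # remaining arguments
(( size >= default_hugepages ))                  # sanity check seen at @55
nr_hugepages=$(( size / default_hugepages ))     # -> 1024, matching @57
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages               # node0 expects all 1024 pages
done

[End of note; the trace continues with get_test_nr_hugepages_per_node.]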
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.129 12:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:49.669 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:49.669 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:49.669 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
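The verify step below calls get_meminfo three times (AnonHugePages, HugePages_Surp, HugePages_Rsvd). Reconstructed from the common.sh trace, the helper slurps /proc/meminfo (or a per-node meminfo file when a node argument is given; here node is empty, so the /sys path test at @23 fails), strips any 'Node N ' prefix, then scans key/value pairs until the requested key matches. A sketch under those assumptions; the real common.sh feeds mapfile through printf and may differ in other details:

# get_meminfo as reconstructed from the setup/common.sh@17-@33 trace.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node N "; strip it (extglob).
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"   # "0" for AnonHugePages in this run
		return 0
	done
	return 1          # assumed behavior when the key is absent
}

# Typical call, as at hugepages.sh@97: anon=$(get_meminfo AnonHugePages)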
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.669 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73139636 kB' 'MemAvailable: 77638564 kB' 'Buffers: 2696 kB' 'Cached: 14433908 kB' 'SwapCached: 0 kB' 'Active: 10531972 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919304 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517412 kB' 'Mapped: 190796 kB' 'Shmem: 9404756 kB' 'KReclaimable: 241444 kB' 'Slab: 673868 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432424 kB' 'KernelStack: 16544 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11200880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205652 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[setup/common.sh@32 scan trace elided: each field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue]
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
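One reading aid for the scan trace: the right-hand side appears as \A\n\o\n\H\u\g\e\P\a\g\e\s because the pattern is a quoted expansion, and bash's xtrace escapes every character of a quoted [[ == ]] operand to show it is matched literally rather than as a glob. A tiny reproduction (output shown modulo the PS4 prefix this harness uses):

set -x
get=AnonHugePages
var=MemTotal
[[ $var == "$get" ]]
# traced as: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]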
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.670 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73142372 kB' 'MemAvailable: 77641300 kB' 'Buffers: 2696 kB' 'Cached: 14433912 kB' 'SwapCached: 0 kB' 'Active: 10532416 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919748 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517844 kB' 'Mapped: 190740 kB' 'Shmem: 9404760 kB' 'KReclaimable: 241444 kB' 'Slab: 673844 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432400 kB' 'KernelStack: 16512 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11202016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[setup/common.sh@32 scan trace elided: each field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
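With anon and surp both 0, verify_nr_hugepages queries HugePages_Rsvd next. Roughly how the three values fit together (an assumed reading of the check, not lifted from this trace): transparent hugepages, surplus pages, and reservations would all skew the accounting, so the test wants a clean pool before comparing per-node counts against the expected split. A hedged sketch, where the final comparison is hypothetical:

# Assumed shape of the checks around hugepages.sh@97-@100.
anon=$(get_meminfo AnonHugePages)    # 0 kB: no THP interference
surp=$(get_meminfo HugePages_Surp)   # 0: no overcommitted pages
resv=$(get_meminfo HugePages_Rsvd)   # queried next in the trace

total=$(get_meminfo HugePages_Total) # 1024 in all three snapshots
free=$(get_meminfo HugePages_Free)   # 1024: pool untouched so far
if ((anon != 0 || surp != 0)); then
	echo "unexpected hugepage state: anon=$anon surp=$surp" >&2
fi
echo "pool: total=$total free=$free resv=$resv"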
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:49.671 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.672 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73140872 kB' 'MemAvailable: 77639800 kB' 'Buffers: 2696 kB' 'Cached: 14433912 kB' 'SwapCached: 0 kB' 'Active: 10531968 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919300 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517828 kB' 'Mapped: 190708 kB' 'Shmem: 9404760 kB' 'KReclaimable: 241444 kB' 'Slab: 673864 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432420 kB' 'KernelStack: 16512 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11203528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[setup/common.sh@32 scan trace elided: each field from MemTotal through HardwareCorrupted is tested against HugePages_Rsvd and skipped with continue]
00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.673 nr_hugepages=1024 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.673 resv_hugepages=0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.673 surplus_hugepages=0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.673 anon_hugepages=0 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73141412 kB' 'MemAvailable: 77640340 kB' 'Buffers: 2696 kB' 'Cached: 14433912 kB' 'SwapCached: 0 kB' 'Active: 10532252 kB' 'Inactive: 4419180 kB' 'Active(anon): 9919584 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518224 kB' 'Mapped: 190708 kB' 'Shmem: 9404760 kB' 'KReclaimable: 241444 
kB' 'Slab: 673864 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432420 kB' 'KernelStack: 16592 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11203552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205652 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.674 12:47:27 
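[editor's note] The common.sh@17-33 statements traced above are one pass of the get_meminfo helper used throughout this test. A minimal sketch of its logic, reconstructed from the traced lines themselves (illustrative, not the verbatim SPDK source; extglob is assumed to be enabled, since the Node-prefix strip at common.sh@29 needs it):

shopt -s extglob

get_meminfo() {
	# get_meminfo <key> [<numa node>] -> prints the value of <key>
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# with a node argument, read the per-node view instead
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
	# scan key by key -- the same IFS=': ' / read / continue cycle seen in the trace
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Run against the dump printed above, get_meminfo HugePages_Total would yield the 1024 echoed next in the trace.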
[trace trimmed: the read/continue cycle skips every key (MemTotal through Unaccepted) ahead of HugePages_Total]
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:49.674 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33515192 kB' 'MemUsed: 14601796 kB' 'SwapCached: 0 kB' 'Active: 7601972 kB' 'Inactive: 3542112 kB' 'Active(anon): 7409420 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010524 kB' 'Mapped: 126404 kB' 'AnonPages: 136764 kB' 'Shmem: 7275860 kB' 'KernelStack: 9880 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133832 kB' 'Slab: 412556 kB' 'SReclaimable: 133832 kB' 'SUnreclaim: 278724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
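[editor's note] The hugepages.sh@110-@130 statements being traced are the per-node half of verify_nr_hugepages: get_nodes records what each NUMA node actually holds (nodes_sys), and the test's expectation (nodes_test) is reconciled with reserved and surplus pages before the node0 comparison printed below. A compressed sketch of that accounting under the same array names; the seed values are this run's, get_meminfo is the helper sketched earlier, and the exact structure is inferred from the trace rather than copied from hugepages.sh:

shopt -s extglob

nodes_sys=()
nodes_test=([0]=1024) # this run put the whole 1024-page pool on node0

check_node_allocations() {
	local node id surp resv=0
	# what the system reports per node (hugepages.sh@29-30)
	for node in /sys/devices/system/node/node+([0-9]); do
		id=${node##*node}
		nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
	done
	# fold reserved and surplus pages into the expectation (@115-@117),
	# then print actual vs expected per node (@126-@128)
	for id in "${!nodes_test[@]}"; do
		(( nodes_test[id] += resv ))
		surp=$(get_meminfo HugePages_Surp "$id")
		(( nodes_test[id] += surp ))
		echo "node$id=${nodes_sys[id]} expecting ${nodes_test[id]}"
	done
}

With resv=0 and node0 surplus 0, both sides stay at 1024, which is exactly what the trace echoes next.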
[trace trimmed: the read/continue cycle skips every node0 meminfo key (MemTotal through HugePages_Free) ahead of HugePages_Surp]
00:03:49.675 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:49.676 node0=1024 expecting 1024
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.676 12:47:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:52.208 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:52.208 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.208 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.472 INFO: Requested 512 hugepages but 1024 already allocated on node0
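[editor's note] That INFO line is the point of this no_shrink test: with NRHUGE=512 and CLEAR_HUGE=no, setup.sh is asked for fewer pages than the 1024 already reserved and must keep the larger allocation rather than shrink it. A minimal sketch of the decision it reflects (illustrative only; the real logic lives in spdk/scripts/setup.sh and also handles page sizes and multiple nodes):

requested=${NRHUGE:-512}
allocated=$(get_meminfo HugePages_Total 0) # 1024 on node0 in this run
if (( allocated >= requested )); then
	# never shrink an existing pool
	echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
else
	# grow the pool to the requested size (needs root)
	echo "$requested" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
fi

The verify_nr_hugepages pass traced next confirms the outcome: the pool still reports HugePages_Total: 1024.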
'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.473 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- 
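The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs above, and the matching runs for HugePages_Surp and HugePages_Rsvd below, are bash xtrace of get_meminfo in setup/common.sh walking the buffered meminfo one field at a time until the requested key matches, then echoing its value (0 here, hence anon=0). A minimal sketch of that pattern, under hypothetical names (get_meminfo_sketch is a reconstruction from this trace, not the actual SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Hypothetical simplified version of setup/common.sh:get_meminfo,
    # reconstructed from the trace above; not the real SPDK source.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo mem var val _ line
        # With a node id, prefer that node's meminfo; with $node empty the
        # test fails (as at setup/common.sh@23 above) and the system-wide
        # /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <N> " prefix
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Total:    1024" into key and value;
            # each non-matching key is one [[ ... ]] / continue pair in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch AnonHugePages    # prints 0 in this run (the "kB" unit lands in _)
    get_meminfo_sketch HugePages_Total  # prints 1024 on this machine
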
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73150888 kB' 'MemAvailable: 77649816 kB' 'Buffers: 2696 kB' 'Cached: 14434028 kB' 'SwapCached: 0 kB' 'Active: 10534232 kB' 'Inactive: 4419180 kB' 'Active(anon): 9921564 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520236 kB' 'Mapped: 190720 kB' 'Shmem: 9404876 kB' 'KReclaimable: 241444 kB' 'Slab: 674216 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432772 kB' 'KernelStack: 16704 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11202164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 
12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.474 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
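While the same field-by-field scan repeats for HugePages_Surp, one earlier detail is worth calling out: AnonHugePages was only consulted because of the gate traced at setup/hugepages.sh@96, which skips anonymous hugepages when transparent hugepages are set to [never]; this machine reports "always [madvise] never", so the gate passed. A sketch of that check, reusing the hypothetical helper above:

    # THP policy gate as traced at setup/hugepages.sh@96-97: anonymous
    # (transparent) hugepages only count when THP is not disabled.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run, so anon=0
    fi
    echo "anon=$anon"
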
00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.475 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73150560 kB' 'MemAvailable: 77649488 kB' 'Buffers: 2696 kB' 'Cached: 14434048 kB' 'SwapCached: 0 kB' 'Active: 10533912 kB' 'Inactive: 4419180 kB' 'Active(anon): 9921244 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519864 kB' 'Mapped: 190720 kB' 'Shmem: 9404896 kB' 'KReclaimable: 241444 kB' 'Slab: 674216 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432772 kB' 'KernelStack: 16720 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11202192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205748 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 
12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.476 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
[xtrace trimmed: setup/common.sh@31-32 walks every remaining /proc/meminfo key (Mapped through HugePages_Free) with IFS=': ' / read -r var val _ and continues until the key matches HugePages_Rsvd]
00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.478 nr_hugepages=1024 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.478 resv_hugepages=0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.478 surplus_hugepages=0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.478 anon_hugepages=0 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
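The scan trimmed above is the whole of get_meminfo: setup/common.sh snapshots the chosen meminfo file into an array, strips any leading "Node N " label, then splits each line on ': ' and echoes the value whose key matches the request. A minimal standalone sketch of that parsing idea, reconstructed from the xtrace (function name and return convention here are illustrative, not the exact SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo_value KEY [NODE] - print KEY's value from /proc/meminfo,
    # or from a NUMA node's meminfo file when NODE is given.
    get_meminfo_value() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_value HugePages_Total      # prints 1024 on the box in this log
    get_meminfo_value HugePages_Surp 0     # per-node query, as at hugepages.sh@117

Working on an array snapshot rather than re-reading the file per key is why the trace shows one mapfile followed by a long run of compare/continue lines.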
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.478 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293560 kB' 'MemFree: 73149156 kB' 'MemAvailable: 77648084 kB' 'Buffers: 2696 kB' 'Cached: 14434076 kB' 'SwapCached: 0 kB' 'Active: 10533884 kB' 'Inactive: 4419180 kB' 'Active(anon): 9921216 kB' 'Inactive(anon): 0 kB' 'Active(file): 612668 kB' 'Inactive(file): 4419180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520272 kB' 'Mapped: 190720 kB' 'Shmem: 9404924 kB' 'KReclaimable: 241444 kB' 'Slab: 674216 kB' 'SReclaimable: 241444 kB' 'SUnreclaim: 432772 kB' 'KernelStack: 16752 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486808 kB' 'Committed_AS: 11204196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205716 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875412 kB' 'DirectMap2M: 25063424 kB' 'DirectMap1G: 74448896 kB'
[xtrace trimmed: the same per-key compare/continue scan runs over the /proc/meminfo snapshot above, this time looking for HugePages_Total]
00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
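get_nodes (hugepages.sh@27-33 above) just globs /sys/devices/system/node/node* and records one hugepage count per node; the xtrace shows only the already-expanded assignments (node0=1024, node1=0), so the exact source of each count is not visible. A sketch of the same enumeration using the per-node sysfs counter for the 2048 kB pool seen in this log (an assumption; the real helper may read the node's meminfo instead):

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # one entry per NUMA node, keyed by the trailing node number
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "nodes=$no_nodes counts=${nodes_sys[*]}"   # e.g. nodes=2 counts=1024 0

On this host that yields two nodes with all 1024 pages on node0, which is exactly what the later 'node0=1024 expecting 1024' line asserts.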
00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.480 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.741 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.741 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.741 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116988 kB' 'MemFree: 33507856 kB' 'MemUsed: 14609132 kB' 'SwapCached: 0 kB' 'Active: 7601868 kB' 'Inactive: 3542112 kB' 'Active(anon): 7409316 kB' 'Inactive(anon): 0 kB' 'Active(file): 192552 kB' 'Inactive(file): 3542112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11010604 kB' 'Mapped: 126404 kB' 'AnonPages: 136612 kB' 'Shmem: 7275940 kB' 'KernelStack: 9928 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133832 kB' 'Slab: 413024 kB' 'SReclaimable: 133832 kB' 'SUnreclaim: 279192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace trimmed: per-key compare/continue scan of the node0 snapshot above against HugePages_Surp]
00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.742 node0=1024 expecting 1024 12:47:30 setup.sh.hugepages.no_shrink_alloc --
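The sorted_t/sorted_s assignments at hugepages.sh@127 use a compact bash idiom: index an associative array by each observed value, so afterwards the number of array keys equals the number of distinct per-node counts. A toy illustration of the idiom (input values here are hypothetical, and how the caller consumes sorted_t afterwards is not shown in this trace):

    declare -A sorted_t=()
    nodes_test=(1024 0)          # hypothetical per-node hugepage counts
    for count in "${nodes_test[@]}"; do
        sorted_t[$count]=1       # one key per distinct value
    done
    echo "distinct counts: ${#sorted_t[@]}"   # 2 here, so the split is uneven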
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.742 00:03:52.742 real 0m5.534s 00:03:52.742 user 0m1.963s 00:03:52.742 sys 0m3.489s 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.742 12:47:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.742 ************************************ 00:03:52.742 END TEST no_shrink_alloc 00:03:52.742 ************************************ 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.742 12:47:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.742 00:03:52.742 real 0m27.709s 00:03:52.742 user 0m8.473s 00:03:52.742 sys 0m14.579s 00:03:52.742 12:47:30 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.742 12:47:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.742 ************************************ 00:03:52.742 END TEST hugepages 00:03:52.742 ************************************ 00:03:52.742 12:47:30 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:52.742 12:47:30 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:52.743 12:47:30 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:52.743 12:47:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.743 ************************************ 00:03:52.743 START TEST driver 00:03:52.743 ************************************ 00:03:52.743 12:47:30 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:52.743 * Looking for test storage... 
00:03:52.743 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:52.743 12:47:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:52.743 12:47:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.743 12:47:30 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.017 12:47:35 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:58.017 12:47:35 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:58.017 12:47:35 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:58.017 12:47:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.017 ************************************ 00:03:58.017 START TEST guess_driver 00:03:58.017 ************************************ 00:03:58.017 12:47:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 163 > 0 )) 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:58.018 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- 
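The vfio pick above boils down to two probes: the host must expose IOMMU groups (163 of them here) or allow vfio's unsafe no-IOMMU mode, and vfio_pci must resolve to real .ko files via modprobe --show-depends. A sketch of that decision reconstructed from the trace (simplified; the real pick_driver in setup/driver.sh has further fallbacks before it reports 'No valid driver found'):

    shopt -s nullglob

    is_driver() {
        # a module counts as available if modprobe can resolve it to .ko files
        [[ $(modprobe --show-depends "$1" 2> /dev/null) == *.ko* ]]
    }

    guess_vfio() {
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            is_driver vfio_pci && { echo vfio-pci; return 0; }
        fi
        echo 'No valid driver found'
        return 1
    }

    guess_vfio   # prints vfio-pci on the host in this log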
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:58.018 Looking for driver=vfio-pci 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.018 12:47:35 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: setup/driver.sh@57-61 repeats the same triad for every device line of the config output between 00:04:00.557 and 00:04:05.833: read -r _ _ _ _ marker setup_driver, [[ -> == \-\> ]], [[ vfio-pci == vfio-pci ]]]
00:04:05.833 12:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:05.833 12:47:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:05.833 12:47:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.833 12:47:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.027 00:04:10.027 real 0m11.926s 00:04:10.027 user 0m1.980s 00:04:10.027 sys 0m4.243s 12:47:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.027 12:47:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.027 ************************************ 00:04:10.027 END TEST guess_driver 00:04:10.027 ************************************ 00:04:10.027 00:04:10.027 real 0m16.550s 00:04:10.027 user 0m3.288s 00:04:10.027 sys 0m6.837s 12:47:47 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.027
12:47:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.027 ************************************ 00:04:10.027 END TEST driver 00:04:10.027 ************************************ 00:04:10.027 12:47:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:10.027 12:47:47 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:10.027 12:47:47 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:10.027 12:47:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.027 ************************************ 00:04:10.027 START TEST devices 00:04:10.027 ************************************ 00:04:10.027 12:47:47 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:10.027 * Looking for test storage... 00:04:10.027 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:10.027 12:47:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.027 12:47:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:10.027 12:47:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.027 12:47:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:12.629 12:47:50 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:12.629 No valid GPT data, bailing 00:04:12.629 
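
Aside (not part of the captured output): the records around 12:47:50 trace how devices.sh screens block devices before the mount tests, skipping zoned namespaces, devices already carrying a partition table, and anything under a 3 GiB floor. Below is a minimal standalone sketch of the same checks. The sysfs zoned probe, the blkid command, and the 3221225472-byte floor are taken from the trace; the glob, the sysfs size read, and the use of blkid in place of spdk-gpt.py are illustrative assumptions, not SPDK's actual helpers (the real script also filters out nvme*c* controller nodes with the "/sys/block/nvme"!(*c*) extglob seen above).

# screening sketch, assumptions as noted above
min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, cf. setup/devices.sh@198

is_block_zoned() {                               # /sys/block/<dev>/queue/zoned probe
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

block_in_use() {                                 # crude stand-in for the spdk-gpt.py check
    local pt
    pt=$(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) || pt=
    [[ -n $pt ]]                                 # non-empty PTTYPE: partition table present
}

sec_size_to_bytes() {                            # bytes = 512-byte sectors * 512
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

for dev in /sys/block/nvme*n*; do
    dev=${dev##*/}
    [[ -b /dev/$dev ]] || continue
    is_block_zoned "$dev" && continue
    block_in_use "$dev" && continue
    (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && echo "candidate: $dev"
done
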
12:47:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:12.629 12:47:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:12.629 12:47:50 setup.sh.devices -- setup/common.sh@80 -- # echo 8001563222016 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 8001563222016 >= min_disk_size )) 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.629 12:47:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 ************************************ 00:04:12.629 START TEST nvme_mount 00:04:12.629 ************************************ 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:12.629 12:47:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.010 Creating new GPT entries in memory. 00:04:14.010 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.010 other utilities. 00:04:14.010 12:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.010 12:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.010 12:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.010 12:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.010 12:47:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.951 Creating new GPT entries in memory. 00:04:14.951 The operation has completed successfully. 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3457907 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.951 12:47:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:17.486 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.486 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.486 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.486 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.486 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:17.487 12:47:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:17.487 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.487 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.746 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:17.746 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:04:17.746 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:17.746 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
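
Aside (not part of the captured output): setup/common.sh@66-72, traced just above and continuing below, wraps filesystem creation and mounting into one helper: make the mount point, run mkfs.ext4 -qF on the device (optionally size-limited, e.g. the 1024M whole-disk pass this run performs), then mount it. A standalone restatement follows; the flags and the optional size argument come from the trace, while the error handling is my own addition, not SPDK's.

# mkfs/mount helper sketch, assumptions as noted above
mkfs_and_mount() {
    local dev=$1 mount=$2 size=${3:-}           # size like "1024M"; empty = whole device
    mkdir -p "$mount"
    [[ -e $dev ]] || { echo "no such device: $dev" >&2; return 1; }
    mkfs.ext4 -qF "$dev" $size                  # unquoted on purpose: empty size must vanish
    mount "$dev" "$mount"
}

# e.g. (paths from this run):
# mkfs_and_mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M
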
00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.746 12:47:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:20.282 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.541 12:47:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 
12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.834 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:23.835 12:48:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.835 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.835 00:04:23.835 real 0m10.679s 00:04:23.835 user 0m2.765s 00:04:23.835 sys 0m5.530s 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.835 12:48:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:23.835 ************************************ 00:04:23.835 END TEST nvme_mount 00:04:23.835 ************************************ 00:04:23.835 12:48:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.835 12:48:01 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
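
Aside (not part of the captured output): the nvme_mount teardown that just completed (setup/devices.sh@20-28, invoked via @128) follows a fixed order: unmount only if the mount point is live, then wipe the filesystem signature on the partition before wiping GPT/PMBR on the whole disk. Condensed into one function below; the command sequence mirrors the trace, the argument handling is an illustrative assumption.

# cleanup sketch, sequence from the trace above
cleanup_nvme() {
    local mount=$1 part=$2 disk=$3
    mountpoint -q "$mount" && umount "$mount"   # unmount only when actually mounted
    [[ -b $part ]] && wipefs --all "$part"      # ext4 signature on the partition first
    [[ -b $disk ]] && wipefs --all "$disk"      # then GPT/PMBR on the whole disk
}

# e.g. (paths from this run):
# cleanup_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /dev/nvme0n1p1 /dev/nvme0n1
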
00:04:23.835 12:48:01 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.835 12:48:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.835 ************************************ 00:04:23.835 START TEST dm_mount 00:04:23.835 ************************************ 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.835 12:48:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:24.402 Creating new GPT entries in memory. 00:04:24.402 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.402 other utilities. 00:04:24.402 12:48:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.402 12:48:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.402 12:48:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.402 12:48:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.402 12:48:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.779 Creating new GPT entries in memory. 00:04:25.779 The operation has completed successfully. 
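
Aside (not part of the captured output): the sgdisk arithmetic traced at setup/common.sh@57-60 is easy to check by hand. One GiB per partition is 1073741824 / 512 = 2097152 sectors, so partition 1 spans sectors 2048..2099199 and partition 2 spans 2099200..4196351, exactly the --new arguments this run issues (the second appears just below). The same loop, standalone; the device name and every command come from this trace.

# partitioning sketch, all values grounded in the trace
disk=/dev/nvme0n1
part_no=2
size=$((1073741824 / 512))                      # 2097152 sectors per 1 GiB partition
sgdisk "$disk" --zap-all
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
    ((part_start = part_start == 0 ? 2048 : part_end + 1))
    ((part_end = part_start + size - 1))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done
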
00:04:25.779 12:48:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.779 12:48:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.779 12:48:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.779 12:48:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.779 12:48:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:26.717 The operation has completed successfully. 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3461600 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.717 12:48:04 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.717 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.718 12:48:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.007 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.008 12:48:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:32.546 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:32.806 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:32.806 00:04:32.806 real 0m9.241s 00:04:32.806 user 0m2.324s 00:04:32.806 sys 0m3.987s 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.806 12:48:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:32.806 ************************************ 00:04:32.806 END TEST dm_mount 00:04:32.806 ************************************ 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
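
Aside (not part of the captured output): cleanup_dm, traced above at setup/devices.sh@33-43, tears down in the reverse order of construction: unmount, remove the device-mapper node while its holders still exist, then wipe the two backing partitions. As one function below; every name and path comes from this run.

# dm teardown sketch, sequence and names from the trace above
cleanup_dm() {
    local mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
    mountpoint -q "$mount" && umount "$mount"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
}
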
00:04:32.806 12:48:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.066 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:33.066 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.066 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.066 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.066 12:48:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:33.066 00:04:33.066 real 0m23.681s 00:04:33.066 user 0m6.326s 00:04:33.066 sys 0m11.911s 00:04:33.066 12:48:10 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.066 12:48:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.066 ************************************ 00:04:33.066 END TEST devices 00:04:33.066 ************************************ 00:04:33.066 00:04:33.066 real 1m32.126s 00:04:33.066 user 0m24.283s 00:04:33.066 sys 0m45.619s 00:04:33.066 12:48:10 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.066 12:48:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.066 ************************************ 00:04:33.066 END TEST setup.sh 00:04:33.066 ************************************ 00:04:33.066 12:48:10 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:35.605 Hugepages 00:04:35.605 node hugesize free / total 00:04:35.605 node0 1048576kB 0 / 0 00:04:35.605 node0 2048kB 2048 / 2048 00:04:35.605 node1 1048576kB 0 / 0 00:04:35.605 node1 2048kB 0 / 0 00:04:35.605 00:04:35.605 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.605 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:35.605 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:35.864 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:35.864 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:35.864 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:35.864 12:48:13 -- spdk/autotest.sh@130 -- # uname -s 00:04:35.864 12:48:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:35.864 12:48:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:35.864 12:48:13 -- 
common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:39.153 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.153 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.153 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.154 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.431 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.431 12:48:21 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:45.000 12:48:22 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:45.000 12:48:22 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:45.000 12:48:22 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.000 12:48:22 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:45.000 12:48:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:45.000 12:48:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:45.000 12:48:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.000 12:48:22 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.000 12:48:22 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:45.000 12:48:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:45.000 12:48:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:45.000 12:48:22 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.291 Waiting for block devices as requested 00:04:48.292 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:48.292 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:48.292 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:48.550 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:48.550 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:48.551 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:48.809 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:48.809 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:48.809 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:48.809 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:49.067 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:49.067 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:49.067 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:49.389 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:49.389 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:49.389 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:49.389 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:49.649 12:48:27 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:49.649 12:48:27 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1498 -- # 
readlink -f /sys/class/nvme/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1498 -- # grep 0000:5f:00.0/nvme/nvme 00:04:49.649 12:48:27 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:49.649 12:48:27 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:49.649 12:48:27 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:49.649 12:48:27 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:49.649 12:48:27 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:49.649 12:48:27 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:49.649 12:48:27 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:49.649 12:48:27 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:49.649 12:48:27 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:49.649 12:48:27 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:49.649 12:48:27 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:49.649 12:48:27 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:49.649 12:48:27 -- common/autotest_common.sh@1553 -- # continue 00:04:49.649 12:48:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:49.649 12:48:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.649 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:04:49.649 12:48:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:49.649 12:48:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.649 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:04:49.649 12:48:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:52.941 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.941 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.215 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.215 12:48:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.215 12:48:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.215 12:48:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.215 12:48:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.215 12:48:35 -- 
common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:58.215 12:48:35 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.215 12:48:35 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:58.215 12:48:35 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:58.215 12:48:35 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:58.215 12:48:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:58.215 12:48:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:58.215 12:48:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.215 12:48:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.215 12:48:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:58.215 12:48:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:58.215 12:48:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5f:00.0 00:04:58.215 12:48:35 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:58.215 12:48:35 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:58.215 12:48:35 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:58.215 12:48:35 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:58.215 12:48:35 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:58.215 12:48:35 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5f:00.0 00:04:58.215 12:48:35 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5f:00.0 ]] 00:04:58.215 12:48:35 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3470167 00:04:58.215 12:48:35 -- common/autotest_common.sh@1594 -- # waitforlisten 3470167 00:04:58.215 12:48:35 -- common/autotest_common.sh@827 -- # '[' -z 3470167 ']' 00:04:58.215 12:48:35 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.215 12:48:35 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.215 12:48:35 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.215 12:48:35 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.215 12:48:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.215 12:48:35 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.215 [2024-05-15 12:48:35.620939] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
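The get_nvme_bdfs_by_id walk traced above boils down to two steps: ask gen_nvme.sh for every NVMe transport address, then keep only the controllers whose PCI device id matches. A hedged sketch of the same idea (the gen_nvme.sh path, the jq filter, and the sysfs device file all appear in the trace; 0x0a54 is the device id being matched here); the spdk_tgt startup banner continues below:

    # Enumerate NVMe BDFs and keep those whose PCI device id is 0x0a54.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && printf '%s\n' "$bdf"
    done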
00:04:58.215 [2024-05-15 12:48:35.620999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470167 ]
00:04:58.215 EAL: No free 2048 kB hugepages reported on node 1
00:04:58.215 [2024-05-15 12:48:35.692311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.215 [2024-05-15 12:48:35.782484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.784 12:48:36 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:04:58.784 12:48:36 -- common/autotest_common.sh@860 -- # return 0
00:04:58.784 12:48:36 -- common/autotest_common.sh@1596 -- # bdf_id=0
00:04:58.784 12:48:36 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}"
00:04:58.784 12:48:36 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0
00:05:02.075 nvme0n1
00:05:02.075 12:48:39 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:02.075 [2024-05-15 12:48:39.545580] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:02.075 request:
00:05:02.075 {
00:05:02.075 "nvme_ctrlr_name": "nvme0",
00:05:02.075 "password": "test",
00:05:02.075 "method": "bdev_nvme_opal_revert",
00:05:02.075 "req_id": 1
00:05:02.075 }
00:05:02.075 Got JSON-RPC error response
00:05:02.075 response:
00:05:02.075 {
00:05:02.075 "code": -32602,
00:05:02.075 "message": "Invalid parameters"
00:05:02.075 }
00:05:02.075 12:48:39 -- common/autotest_common.sh@1600 -- # true
00:05:02.075 12:48:39 -- common/autotest_common.sh@1601 -- # (( ++bdf_id ))
00:05:02.075 12:48:39 -- common/autotest_common.sh@1604 -- # killprocess 3470167
00:05:02.075 12:48:39 -- common/autotest_common.sh@946 -- # '[' -z 3470167 ']'
00:05:02.075 12:48:39 -- common/autotest_common.sh@950 -- # kill -0 3470167
00:05:02.075 12:48:39 -- common/autotest_common.sh@951 -- # uname
00:05:02.075 12:48:39 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:02.075 12:48:39 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3470167
00:05:02.075 12:48:39 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:02.075 12:48:39 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:02.075 12:48:39 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3470167'
killing process with pid 3470167
00:05:02.075 12:48:39 -- common/autotest_common.sh@965 -- # kill 3470167
00:05:02.075 12:48:39 -- common/autotest_common.sh@970 -- # wait 3470167
00:05:10.195 12:48:46 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:10.195 12:48:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:10.195 12:48:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:10.195 12:48:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:10.195 12:48:46 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:10.195 12:48:46 -- common/autotest_common.sh@720 -- # xtrace_disable
00:05:10.195 12:48:46 -- common/autotest_common.sh@10 -- # set +x
00:05:10.195 12:48:46 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:05:10.195 12:48:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:10.195 12:48:46 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:10.195 12:48:46
-- common/autotest_common.sh@10 -- # set +x 00:05:10.195 ************************************ 00:05:10.195 START TEST env 00:05:10.195 ************************************ 00:05:10.195 12:48:46 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:10.195 * Looking for test storage... 00:05:10.195 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:10.195 12:48:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.195 12:48:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.195 12:48:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.195 12:48:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.195 ************************************ 00:05:10.195 START TEST env_memory 00:05:10.195 ************************************ 00:05:10.195 12:48:47 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.195 00:05:10.195 00:05:10.195 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.195 http://cunit.sourceforge.net/ 00:05:10.195 00:05:10.195 00:05:10.195 Suite: memory 00:05:10.195 Test: alloc and free memory map ...[2024-05-15 12:48:47.098514] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.195 passed 00:05:10.195 Test: mem map translation ...[2024-05-15 12:48:47.117002] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.195 [2024-05-15 12:48:47.117020] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.195 [2024-05-15 12:48:47.117059] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.195 [2024-05-15 12:48:47.117069] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.195 passed 00:05:10.195 Test: mem map registration ...[2024-05-15 12:48:47.153006] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:10.195 [2024-05-15 12:48:47.153024] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:10.195 passed 00:05:10.195 Test: mem map adjacent registrations ...passed 00:05:10.195 00:05:10.195 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.195 suites 1 1 n/a 0 0 00:05:10.195 tests 4 4 4 0 0 00:05:10.195 asserts 152 152 152 0 n/a 00:05:10.195 00:05:10.195 Elapsed time = 0.136 seconds 00:05:10.195 00:05:10.195 real 0m0.149s 00:05:10.195 user 0m0.134s 00:05:10.195 sys 0m0.015s 00:05:10.195 12:48:47 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.195 12:48:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:10.195 ************************************ 00:05:10.195 END TEST env_memory 00:05:10.195 ************************************ 
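Every sub-test above and below is launched through the run_test helper from autotest_common.sh, which is what produces the START/END banners and the real/user/sys timing lines in this log. A minimal sketch of what run_test does (the real helper also validates its arguments and records timing data; this condenses it to the behaviour visible here):

    # run_test <name> <command...>: banner, time the command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test env_memory "$rootdir/test/env/memory/memory_ut"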
00:05:10.195 12:48:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.195 12:48:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.195 12:48:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.195 12:48:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.195 ************************************ 00:05:10.195 START TEST env_vtophys 00:05:10.195 ************************************ 00:05:10.195 12:48:47 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.195 EAL: lib.eal log level changed from notice to debug 00:05:10.195 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.195 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.195 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.195 EAL: Detected lcore 3 as core 3 on socket 0 00:05:10.195 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.195 EAL: Detected lcore 5 as core 8 on socket 0 00:05:10.195 EAL: Detected lcore 6 as core 9 on socket 0 00:05:10.195 EAL: Detected lcore 7 as core 10 on socket 0 00:05:10.195 EAL: Detected lcore 8 as core 11 on socket 0 00:05:10.195 EAL: Detected lcore 9 as core 16 on socket 0 00:05:10.195 EAL: Detected lcore 10 as core 17 on socket 0 00:05:10.195 EAL: Detected lcore 11 as core 18 on socket 0 00:05:10.195 EAL: Detected lcore 12 as core 19 on socket 0 00:05:10.195 EAL: Detected lcore 13 as core 20 on socket 0 00:05:10.195 EAL: Detected lcore 14 as core 24 on socket 0 00:05:10.195 EAL: Detected lcore 15 as core 25 on socket 0 00:05:10.195 EAL: Detected lcore 16 as core 26 on socket 0 00:05:10.195 EAL: Detected lcore 17 as core 27 on socket 0 00:05:10.195 EAL: Detected lcore 18 as core 0 on socket 1 00:05:10.195 EAL: Detected lcore 19 as core 1 on socket 1 00:05:10.195 EAL: Detected lcore 20 as core 2 on socket 1 00:05:10.195 EAL: Detected lcore 21 as core 3 on socket 1 00:05:10.195 EAL: Detected lcore 22 as core 4 on socket 1 00:05:10.195 EAL: Detected lcore 23 as core 8 on socket 1 00:05:10.195 EAL: Detected lcore 24 as core 9 on socket 1 00:05:10.195 EAL: Detected lcore 25 as core 10 on socket 1 00:05:10.195 EAL: Detected lcore 26 as core 11 on socket 1 00:05:10.195 EAL: Detected lcore 27 as core 16 on socket 1 00:05:10.195 EAL: Detected lcore 28 as core 17 on socket 1 00:05:10.195 EAL: Detected lcore 29 as core 18 on socket 1 00:05:10.195 EAL: Detected lcore 30 as core 19 on socket 1 00:05:10.195 EAL: Detected lcore 31 as core 20 on socket 1 00:05:10.195 EAL: Detected lcore 32 as core 24 on socket 1 00:05:10.195 EAL: Detected lcore 33 as core 25 on socket 1 00:05:10.195 EAL: Detected lcore 34 as core 26 on socket 1 00:05:10.195 EAL: Detected lcore 35 as core 27 on socket 1 00:05:10.195 EAL: Detected lcore 36 as core 0 on socket 0 00:05:10.195 EAL: Detected lcore 37 as core 1 on socket 0 00:05:10.195 EAL: Detected lcore 38 as core 2 on socket 0 00:05:10.195 EAL: Detected lcore 39 as core 3 on socket 0 00:05:10.195 EAL: Detected lcore 40 as core 4 on socket 0 00:05:10.195 EAL: Detected lcore 41 as core 8 on socket 0 00:05:10.195 EAL: Detected lcore 42 as core 9 on socket 0 00:05:10.195 EAL: Detected lcore 43 as core 10 on socket 0 00:05:10.195 EAL: Detected lcore 44 as core 11 on socket 0 00:05:10.195 EAL: Detected lcore 45 as core 16 on socket 0 00:05:10.195 EAL: Detected lcore 46 as core 17 on socket 0 00:05:10.195 EAL: Detected lcore 47 as core 18 on socket 0 00:05:10.195 EAL: Detected lcore 
48 as core 19 on socket 0 00:05:10.195 EAL: Detected lcore 49 as core 20 on socket 0 00:05:10.195 EAL: Detected lcore 50 as core 24 on socket 0 00:05:10.195 EAL: Detected lcore 51 as core 25 on socket 0 00:05:10.195 EAL: Detected lcore 52 as core 26 on socket 0 00:05:10.195 EAL: Detected lcore 53 as core 27 on socket 0 00:05:10.195 EAL: Detected lcore 54 as core 0 on socket 1 00:05:10.195 EAL: Detected lcore 55 as core 1 on socket 1 00:05:10.195 EAL: Detected lcore 56 as core 2 on socket 1 00:05:10.195 EAL: Detected lcore 57 as core 3 on socket 1 00:05:10.195 EAL: Detected lcore 58 as core 4 on socket 1 00:05:10.195 EAL: Detected lcore 59 as core 8 on socket 1 00:05:10.195 EAL: Detected lcore 60 as core 9 on socket 1 00:05:10.195 EAL: Detected lcore 61 as core 10 on socket 1 00:05:10.195 EAL: Detected lcore 62 as core 11 on socket 1 00:05:10.195 EAL: Detected lcore 63 as core 16 on socket 1 00:05:10.195 EAL: Detected lcore 64 as core 17 on socket 1 00:05:10.195 EAL: Detected lcore 65 as core 18 on socket 1 00:05:10.195 EAL: Detected lcore 66 as core 19 on socket 1 00:05:10.195 EAL: Detected lcore 67 as core 20 on socket 1 00:05:10.195 EAL: Detected lcore 68 as core 24 on socket 1 00:05:10.195 EAL: Detected lcore 69 as core 25 on socket 1 00:05:10.195 EAL: Detected lcore 70 as core 26 on socket 1 00:05:10.195 EAL: Detected lcore 71 as core 27 on socket 1 00:05:10.195 EAL: Maximum logical cores by configuration: 128 00:05:10.195 EAL: Detected CPU lcores: 72 00:05:10.195 EAL: Detected NUMA nodes: 2 00:05:10.195 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:10.195 EAL: Detected shared linkage of DPDK 00:05:10.195 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.195 EAL: Bus pci wants IOVA as 'DC' 00:05:10.195 EAL: Buses did not request a specific IOVA mode. 00:05:10.195 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.195 EAL: Selected IOVA mode 'VA' 00:05:10.195 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.195 EAL: Probing VFIO support... 00:05:10.195 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.195 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.195 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.195 EAL: VFIO support initialized 00:05:10.195 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.195 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.195 EAL: Setting up physically contiguous memory... 
00:05:10.195 EAL: Setting maximum number of open files to 524288 00:05:10.195 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.195 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.195 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.195 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.195 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.195 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.195 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.195 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.195 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.195 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.195 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.195 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.195 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.195 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.195 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.195 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.195 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.195 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.195 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.195 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.195 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.195 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.195 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.195 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:10.195 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.195 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.196 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.196 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.196 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.196 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.196 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.196 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.196 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.196 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.196 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.196 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.196 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.196 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.196 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.196 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:10.196 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.196 EAL: Hugepages will be freed exactly as allocated. 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: TSC frequency is ~2300000 KHz 00:05:10.196 EAL: Main lcore 0 is ready (tid=7f3dbb71ca00;cpuset=[0]) 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 0 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.196 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.196 00:05:10.196 00:05:10.196 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.196 http://cunit.sourceforge.net/ 00:05:10.196 00:05:10.196 00:05:10.196 Suite: components_suite 00:05:10.196 Test: vtophys_malloc_test ...passed 00:05:10.196 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.196 EAL: Trying to obtain current memory policy. 
00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.196 EAL: Trying to obtain current memory policy. 
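Each expand/shrink pair in this suite is the EAL mem event callback growing and then releasing hugepage-backed heap; the suite continues below with the larger allocations. To watch the same activity from outside the test process, the kernel's hugepage counters show it; an observation aid, not part of the test itself:

    # 2 MB hugepage usage, system-wide and per NUMA node.
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages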
00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.196 EAL: Restoring previous memory policy: 4 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.196 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.196 EAL: request: mp_malloc_sync 00:05:10.196 EAL: No shared files mode enabled, IPC is disabled 00:05:10.196 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.196 EAL: Trying to obtain current memory policy. 00:05:10.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.455 EAL: Restoring previous memory policy: 4 00:05:10.455 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.455 EAL: request: mp_malloc_sync 00:05:10.455 EAL: No shared files mode enabled, IPC is disabled 00:05:10.455 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.714 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.714 EAL: request: mp_malloc_sync 00:05:10.714 EAL: No shared files mode enabled, IPC is disabled 00:05:10.714 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.714 passed 00:05:10.714 00:05:10.714 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.714 suites 1 1 n/a 0 0 00:05:10.714 tests 2 2 2 0 0 00:05:10.714 asserts 497 497 497 0 n/a 00:05:10.714 00:05:10.714 Elapsed time = 1.120 seconds 00:05:10.714 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.714 EAL: request: mp_malloc_sync 00:05:10.714 EAL: No shared files mode enabled, IPC is disabled 00:05:10.714 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.714 EAL: No shared files mode enabled, IPC is disabled 00:05:10.714 EAL: No shared files mode enabled, IPC is disabled 00:05:10.714 EAL: No shared files mode enabled, IPC is disabled 00:05:10.714 00:05:10.714 real 0m1.257s 00:05:10.714 user 0m0.724s 00:05:10.714 sys 0m0.499s 00:05:10.714 12:48:48 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.714 12:48:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.714 ************************************ 00:05:10.714 END TEST env_vtophys 00:05:10.714 ************************************ 00:05:10.714 12:48:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.714 12:48:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.714 12:48:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.714 12:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.973 ************************************ 00:05:10.973 START TEST env_pci 00:05:10.973 ************************************ 00:05:10.973 12:48:48 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.973 00:05:10.973 00:05:10.973 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.973 http://cunit.sourceforge.net/ 00:05:10.973 00:05:10.973 00:05:10.973 Suite: pci 00:05:10.973 Test: pci_hook ...[2024-05-15 12:48:48.656685] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3471923 has claimed it 00:05:10.973 EAL: Cannot find device (10000:00:01.0) 00:05:10.973 EAL: Failed to attach device on primary process 00:05:10.973 passed 00:05:10.973 00:05:10.973 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.973 suites 1 
1 n/a 0 0 00:05:10.973 tests 1 1 1 0 0 00:05:10.973 asserts 25 25 25 0 n/a 00:05:10.973 00:05:10.973 Elapsed time = 0.034 seconds 00:05:10.973 00:05:10.973 real 0m0.058s 00:05:10.973 user 0m0.020s 00:05:10.973 sys 0m0.038s 00:05:10.973 12:48:48 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.973 12:48:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:10.973 ************************************ 00:05:10.973 END TEST env_pci 00:05:10.973 ************************************ 00:05:10.973 12:48:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.973 12:48:48 env -- env/env.sh@15 -- # uname 00:05:10.973 12:48:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.973 12:48:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.973 12:48:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.973 12:48:48 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:10.973 12:48:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.973 12:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.973 ************************************ 00:05:10.973 START TEST env_dpdk_post_init 00:05:10.973 ************************************ 00:05:10.973 12:48:48 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.973 EAL: Detected CPU lcores: 72 00:05:10.973 EAL: Detected NUMA nodes: 2 00:05:10.973 EAL: Detected shared linkage of DPDK 00:05:10.973 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.233 EAL: Selected IOVA mode 'VA' 00:05:11.233 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.233 EAL: VFIO support initialized 00:05:11.233 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.233 EAL: Using IOMMU type 1 (Type 1) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:11.233 EAL: Ignore mapping IO port bar(1) 00:05:11.233 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:12.171 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:12.171 EAL: Ignore mapping 
IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:12.171 EAL: Ignore mapping IO port bar(1) 00:05:12.171 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:22.150 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:22.150 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:22.150 Starting DPDK initialization... 00:05:22.150 Starting SPDK post initialization... 00:05:22.150 SPDK NVMe probe 00:05:22.150 Attaching to 0000:5f:00.0 00:05:22.150 Attached to 0000:5f:00.0 00:05:22.150 Cleaning up... 00:05:22.150 00:05:22.150 real 0m9.933s 00:05:22.150 user 0m7.784s 00:05:22.150 sys 0m1.199s 00:05:22.150 12:48:58 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.150 12:48:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 END TEST env_dpdk_post_init 00:05:22.150 ************************************ 00:05:22.150 12:48:58 env -- env/env.sh@26 -- # uname 00:05:22.150 12:48:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.150 12:48:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.150 12:48:58 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.150 12:48:58 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.150 12:48:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 START TEST env_mem_callbacks 00:05:22.150 ************************************ 00:05:22.150 12:48:58 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.150 EAL: Detected CPU lcores: 72 00:05:22.150 EAL: Detected NUMA nodes: 2 00:05:22.150 EAL: Detected shared linkage of DPDK 00:05:22.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.150 EAL: Selected IOVA mode 'VA' 00:05:22.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.150 EAL: VFIO support initialized 00:05:22.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.150 00:05:22.150 00:05:22.150 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.150 http://cunit.sourceforge.net/ 00:05:22.150 00:05:22.150 00:05:22.150 Suite: memory 00:05:22.150 Test: test ... 
00:05:22.150 register 0x200000200000 2097152 00:05:22.150 malloc 3145728 00:05:22.150 register 0x200000400000 4194304 00:05:22.150 buf 0x200000500000 len 3145728 PASSED 00:05:22.150 malloc 64 00:05:22.150 buf 0x2000004fff40 len 64 PASSED 00:05:22.150 malloc 4194304 00:05:22.150 register 0x200000800000 6291456 00:05:22.150 buf 0x200000a00000 len 4194304 PASSED 00:05:22.150 free 0x200000500000 3145728 00:05:22.150 free 0x2000004fff40 64 00:05:22.150 unregister 0x200000400000 4194304 PASSED 00:05:22.150 free 0x200000a00000 4194304 00:05:22.150 unregister 0x200000800000 6291456 PASSED 00:05:22.150 malloc 8388608 00:05:22.150 register 0x200000400000 10485760 00:05:22.150 buf 0x200000600000 len 8388608 PASSED 00:05:22.150 free 0x200000600000 8388608 00:05:22.150 unregister 0x200000400000 10485760 PASSED 00:05:22.150 passed 00:05:22.150 00:05:22.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.150 suites 1 1 n/a 0 0 00:05:22.150 tests 1 1 1 0 0 00:05:22.150 asserts 15 15 15 0 n/a 00:05:22.150 00:05:22.150 Elapsed time = 0.006 seconds 00:05:22.150 00:05:22.150 real 0m0.066s 00:05:22.150 user 0m0.017s 00:05:22.150 sys 0m0.048s 00:05:22.150 12:48:58 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.150 12:48:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 END TEST env_mem_callbacks 00:05:22.150 ************************************ 00:05:22.150 00:05:22.150 real 0m12.027s 00:05:22.150 user 0m8.863s 00:05:22.150 sys 0m2.196s 00:05:22.150 12:48:58 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.150 12:48:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 END TEST env 00:05:22.150 ************************************ 00:05:22.150 12:48:58 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:22.150 12:48:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.150 12:48:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.150 12:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 START TEST rpc 00:05:22.150 ************************************ 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:22.150 * Looking for test storage... 00:05:22.150 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3473492 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3473492 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@827 -- # '[' -z 3473492 ']' 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
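The "Waiting for process to start up..." message above comes from the waitforlisten helper in autotest_common.sh, which polls the target's RPC socket (with max_retries=100, per the trace) until it answers or the process dies. A minimal sketch of that idea, assuming rpc_get_methods as the readiness probe; the real helper is more thorough:

    # Poll until an SPDK target answers on its RPC socket, or give up.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # timed out
    }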
00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 [2024-05-15 12:48:59.183611] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:05:22.150 [2024-05-15 12:48:59.183678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473492 ] 00:05:22.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.150 [2024-05-15 12:48:59.254704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.150 [2024-05-15 12:48:59.337246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:22.150 [2024-05-15 12:48:59.337293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3473492' to capture a snapshot of events at runtime. 00:05:22.150 [2024-05-15 12:48:59.337304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:22.150 [2024-05-15 12:48:59.337312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:22.150 [2024-05-15 12:48:59.337319] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3473492 for offline analysis/debug. 00:05:22.150 [2024-05-15 12:48:59.337360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:22.150 12:48:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.150 12:48:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 ************************************ 00:05:22.411 START TEST rpc_integrity 00:05:22.411 ************************************ 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
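Stripped of the xtrace plumbing, the rpc_integrity steps that follow are a handful of rpc.py calls: create a malloc bdev, stack a passthru bdev on it, count bdevs at each step, then delete both. The commands and sizes below mirror the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                      # starts at 0
    $rpc bdev_malloc_create 8 512                        # 8 MB malloc bdev, 512 B blocks -> Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0    # passthru stacked on Malloc0
    $rpc bdev_get_bdevs | jq length                      # now 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                      # back to 0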
00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.411 { 00:05:22.411 "name": "Malloc0", 00:05:22.411 "aliases": [ 00:05:22.411 "126c03d2-93fe-4965-8183-713b751c544a" 00:05:22.411 ], 00:05:22.411 "product_name": "Malloc disk", 00:05:22.411 "block_size": 512, 00:05:22.411 "num_blocks": 16384, 00:05:22.411 "uuid": "126c03d2-93fe-4965-8183-713b751c544a", 00:05:22.411 "assigned_rate_limits": { 00:05:22.411 "rw_ios_per_sec": 0, 00:05:22.411 "rw_mbytes_per_sec": 0, 00:05:22.411 "r_mbytes_per_sec": 0, 00:05:22.411 "w_mbytes_per_sec": 0 00:05:22.411 }, 00:05:22.411 "claimed": false, 00:05:22.411 "zoned": false, 00:05:22.411 "supported_io_types": { 00:05:22.411 "read": true, 00:05:22.411 "write": true, 00:05:22.411 "unmap": true, 00:05:22.411 "write_zeroes": true, 00:05:22.411 "flush": true, 00:05:22.411 "reset": true, 00:05:22.411 "compare": false, 00:05:22.411 "compare_and_write": false, 00:05:22.411 "abort": true, 00:05:22.411 "nvme_admin": false, 00:05:22.411 "nvme_io": false 00:05:22.411 }, 00:05:22.411 "memory_domains": [ 00:05:22.411 { 00:05:22.411 "dma_device_id": "system", 00:05:22.411 "dma_device_type": 1 00:05:22.411 }, 00:05:22.411 { 00:05:22.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.411 "dma_device_type": 2 00:05:22.411 } 00:05:22.411 ], 00:05:22.411 "driver_specific": {} 00:05:22.411 } 00:05:22.411 ]' 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 [2024-05-15 12:49:00.157348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:22.411 [2024-05-15 12:49:00.157387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.411 [2024-05-15 12:49:00.157401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa220e0 00:05:22.411 [2024-05-15 12:49:00.157409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.411 [2024-05-15 12:49:00.158528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.411 [2024-05-15 12:49:00.158553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.411 Passthru0 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.411 12:49:00 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.411 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.411 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.411 { 00:05:22.411 "name": "Malloc0", 00:05:22.411 "aliases": [ 00:05:22.411 "126c03d2-93fe-4965-8183-713b751c544a" 00:05:22.411 ], 00:05:22.412 "product_name": "Malloc disk", 00:05:22.412 "block_size": 512, 00:05:22.412 "num_blocks": 16384, 00:05:22.412 "uuid": "126c03d2-93fe-4965-8183-713b751c544a", 00:05:22.412 "assigned_rate_limits": { 00:05:22.412 "rw_ios_per_sec": 0, 00:05:22.412 "rw_mbytes_per_sec": 0, 00:05:22.412 "r_mbytes_per_sec": 0, 00:05:22.412 "w_mbytes_per_sec": 0 00:05:22.412 }, 00:05:22.412 "claimed": true, 00:05:22.412 "claim_type": "exclusive_write", 00:05:22.412 "zoned": false, 00:05:22.412 "supported_io_types": { 00:05:22.412 "read": true, 00:05:22.412 "write": true, 00:05:22.412 "unmap": true, 00:05:22.412 "write_zeroes": true, 00:05:22.412 "flush": true, 00:05:22.412 "reset": true, 00:05:22.412 "compare": false, 00:05:22.412 "compare_and_write": false, 00:05:22.412 "abort": true, 00:05:22.412 "nvme_admin": false, 00:05:22.412 "nvme_io": false 00:05:22.412 }, 00:05:22.412 "memory_domains": [ 00:05:22.412 { 00:05:22.412 "dma_device_id": "system", 00:05:22.412 "dma_device_type": 1 00:05:22.412 }, 00:05:22.412 { 00:05:22.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.412 "dma_device_type": 2 00:05:22.412 } 00:05:22.412 ], 00:05:22.412 "driver_specific": {} 00:05:22.412 }, 00:05:22.412 { 00:05:22.412 "name": "Passthru0", 00:05:22.412 "aliases": [ 00:05:22.412 "33b53d9f-0d23-503a-a985-2c85db7f18e7" 00:05:22.412 ], 00:05:22.412 "product_name": "passthru", 00:05:22.412 "block_size": 512, 00:05:22.412 "num_blocks": 16384, 00:05:22.412 "uuid": "33b53d9f-0d23-503a-a985-2c85db7f18e7", 00:05:22.412 "assigned_rate_limits": { 00:05:22.412 "rw_ios_per_sec": 0, 00:05:22.412 "rw_mbytes_per_sec": 0, 00:05:22.412 "r_mbytes_per_sec": 0, 00:05:22.412 "w_mbytes_per_sec": 0 00:05:22.412 }, 00:05:22.412 "claimed": false, 00:05:22.412 "zoned": false, 00:05:22.412 "supported_io_types": { 00:05:22.412 "read": true, 00:05:22.412 "write": true, 00:05:22.412 "unmap": true, 00:05:22.412 "write_zeroes": true, 00:05:22.412 "flush": true, 00:05:22.412 "reset": true, 00:05:22.412 "compare": false, 00:05:22.412 "compare_and_write": false, 00:05:22.412 "abort": true, 00:05:22.412 "nvme_admin": false, 00:05:22.412 "nvme_io": false 00:05:22.412 }, 00:05:22.412 "memory_domains": [ 00:05:22.412 { 00:05:22.412 "dma_device_id": "system", 00:05:22.412 "dma_device_type": 1 00:05:22.412 }, 00:05:22.412 { 00:05:22.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.412 "dma_device_type": 2 00:05:22.412 } 00:05:22.412 ], 00:05:22.412 "driver_specific": { 00:05:22.412 "passthru": { 00:05:22.412 "name": "Passthru0", 00:05:22.412 "base_bdev_name": "Malloc0" 00:05:22.412 } 00:05:22.412 } 00:05:22.412 } 00:05:22.412 ]' 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.412 12:49:00 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.412 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.412 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.672 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.672 00:05:22.672 real 0m0.268s 00:05:22.672 user 0m0.171s 00:05:22.672 sys 0m0.043s 00:05:22.672 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 ************************************ 00:05:22.672 END TEST rpc_integrity 00:05:22.672 ************************************ 00:05:22.672 12:49:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:22.672 12:49:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.672 12:49:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.672 12:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 ************************************ 00:05:22.672 START TEST rpc_plugins 00:05:22.672 ************************************ 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:22.672 { 00:05:22.672 "name": "Malloc1", 00:05:22.672 "aliases": [ 00:05:22.672 "52c787a7-19a1-41a9-a44f-ad2c69cd375b" 00:05:22.672 ], 00:05:22.672 "product_name": "Malloc disk", 00:05:22.672 "block_size": 4096, 00:05:22.672 "num_blocks": 256, 00:05:22.672 "uuid": "52c787a7-19a1-41a9-a44f-ad2c69cd375b", 00:05:22.672 "assigned_rate_limits": { 00:05:22.672 "rw_ios_per_sec": 0, 00:05:22.672 "rw_mbytes_per_sec": 0, 00:05:22.672 "r_mbytes_per_sec": 0, 00:05:22.672 "w_mbytes_per_sec": 0 00:05:22.672 }, 00:05:22.672 "claimed": false, 00:05:22.672 "zoned": false, 00:05:22.672 "supported_io_types": { 00:05:22.672 "read": true, 00:05:22.672 "write": true, 00:05:22.672 "unmap": true, 00:05:22.672 "write_zeroes": true, 00:05:22.672 "flush": true, 00:05:22.672 
"reset": true, 00:05:22.672 "compare": false, 00:05:22.672 "compare_and_write": false, 00:05:22.672 "abort": true, 00:05:22.672 "nvme_admin": false, 00:05:22.672 "nvme_io": false 00:05:22.672 }, 00:05:22.672 "memory_domains": [ 00:05:22.672 { 00:05:22.672 "dma_device_id": "system", 00:05:22.672 "dma_device_type": 1 00:05:22.672 }, 00:05:22.672 { 00:05:22.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.672 "dma_device_type": 2 00:05:22.672 } 00:05:22.672 ], 00:05:22.672 "driver_specific": {} 00:05:22.672 } 00:05:22.672 ]' 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:22.672 12:49:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:22.672 00:05:22.672 real 0m0.141s 00:05:22.672 user 0m0.087s 00:05:22.672 sys 0m0.025s 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.672 12:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.672 ************************************ 00:05:22.672 END TEST rpc_plugins 00:05:22.672 ************************************ 00:05:22.931 12:49:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:22.931 12:49:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.931 12:49:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.931 12:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.931 ************************************ 00:05:22.931 START TEST rpc_trace_cmd_test 00:05:22.931 ************************************ 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.931 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3473492", 00:05:22.931 "tpoint_group_mask": "0x8", 00:05:22.931 "iscsi_conn": { 00:05:22.931 "mask": "0x2", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "scsi": { 00:05:22.931 "mask": "0x4", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "bdev": { 00:05:22.931 "mask": "0x8", 00:05:22.931 "tpoint_mask": "0xffffffffffffffff" 00:05:22.931 }, 
00:05:22.931 "nvmf_rdma": { 00:05:22.931 "mask": "0x10", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "nvmf_tcp": { 00:05:22.931 "mask": "0x20", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "ftl": { 00:05:22.931 "mask": "0x40", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "blobfs": { 00:05:22.931 "mask": "0x80", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "dsa": { 00:05:22.931 "mask": "0x200", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "thread": { 00:05:22.931 "mask": "0x400", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "nvme_pcie": { 00:05:22.931 "mask": "0x800", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "iaa": { 00:05:22.931 "mask": "0x1000", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "nvme_tcp": { 00:05:22.931 "mask": "0x2000", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "bdev_nvme": { 00:05:22.931 "mask": "0x4000", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 }, 00:05:22.931 "sock": { 00:05:22.931 "mask": "0x8000", 00:05:22.931 "tpoint_mask": "0x0" 00:05:22.931 } 00:05:22.931 }' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.931 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:23.191 12:49:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:23.191 00:05:23.191 real 0m0.199s 00:05:23.191 user 0m0.168s 00:05:23.191 sys 0m0.024s 00:05:23.191 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.191 12:49:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 ************************************ 00:05:23.191 END TEST rpc_trace_cmd_test 00:05:23.191 ************************************ 00:05:23.191 12:49:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:23.191 12:49:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:23.191 12:49:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:23.191 12:49:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.191 12:49:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.191 12:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 ************************************ 00:05:23.191 START TEST rpc_daemon_integrity 00:05:23.191 ************************************ 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.191 { 00:05:23.191 "name": "Malloc2", 00:05:23.191 "aliases": [ 00:05:23.191 "e49b1abe-a7c0-4b66-a986-9ed422f39fc8" 00:05:23.191 ], 00:05:23.191 "product_name": "Malloc disk", 00:05:23.191 "block_size": 512, 00:05:23.191 "num_blocks": 16384, 00:05:23.191 "uuid": "e49b1abe-a7c0-4b66-a986-9ed422f39fc8", 00:05:23.191 "assigned_rate_limits": { 00:05:23.191 "rw_ios_per_sec": 0, 00:05:23.191 "rw_mbytes_per_sec": 0, 00:05:23.191 "r_mbytes_per_sec": 0, 00:05:23.191 "w_mbytes_per_sec": 0 00:05:23.191 }, 00:05:23.191 "claimed": false, 00:05:23.191 "zoned": false, 00:05:23.191 "supported_io_types": { 00:05:23.191 "read": true, 00:05:23.191 "write": true, 00:05:23.191 "unmap": true, 00:05:23.191 "write_zeroes": true, 00:05:23.191 "flush": true, 00:05:23.191 "reset": true, 00:05:23.191 "compare": false, 00:05:23.191 "compare_and_write": false, 00:05:23.191 "abort": true, 00:05:23.191 "nvme_admin": false, 00:05:23.191 "nvme_io": false 00:05:23.191 }, 00:05:23.191 "memory_domains": [ 00:05:23.191 { 00:05:23.191 "dma_device_id": "system", 00:05:23.191 "dma_device_type": 1 00:05:23.191 }, 00:05:23.191 { 00:05:23.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.191 "dma_device_type": 2 00:05:23.191 } 00:05:23.191 ], 00:05:23.191 "driver_specific": {} 00:05:23.191 } 00:05:23.191 ]' 00:05:23.191 12:49:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 [2024-05-15 12:49:01.035736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:23.191 [2024-05-15 12:49:01.035771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.191 [2024-05-15 12:49:01.035786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa23610 00:05:23.191 [2024-05-15 12:49:01.035794] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.191 [2024-05-15 12:49:01.036741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:23.191 [2024-05-15 12:49:01.036767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.191 Passthru0 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.191 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.191 { 00:05:23.191 "name": "Malloc2", 00:05:23.191 "aliases": [ 00:05:23.191 "e49b1abe-a7c0-4b66-a986-9ed422f39fc8" 00:05:23.191 ], 00:05:23.191 "product_name": "Malloc disk", 00:05:23.191 "block_size": 512, 00:05:23.191 "num_blocks": 16384, 00:05:23.191 "uuid": "e49b1abe-a7c0-4b66-a986-9ed422f39fc8", 00:05:23.191 "assigned_rate_limits": { 00:05:23.191 "rw_ios_per_sec": 0, 00:05:23.191 "rw_mbytes_per_sec": 0, 00:05:23.191 "r_mbytes_per_sec": 0, 00:05:23.191 "w_mbytes_per_sec": 0 00:05:23.191 }, 00:05:23.191 "claimed": true, 00:05:23.192 "claim_type": "exclusive_write", 00:05:23.192 "zoned": false, 00:05:23.192 "supported_io_types": { 00:05:23.192 "read": true, 00:05:23.192 "write": true, 00:05:23.192 "unmap": true, 00:05:23.192 "write_zeroes": true, 00:05:23.192 "flush": true, 00:05:23.192 "reset": true, 00:05:23.192 "compare": false, 00:05:23.192 "compare_and_write": false, 00:05:23.192 "abort": true, 00:05:23.192 "nvme_admin": false, 00:05:23.192 "nvme_io": false 00:05:23.192 }, 00:05:23.192 "memory_domains": [ 00:05:23.192 { 00:05:23.192 "dma_device_id": "system", 00:05:23.192 "dma_device_type": 1 00:05:23.192 }, 00:05:23.192 { 00:05:23.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.192 "dma_device_type": 2 00:05:23.192 } 00:05:23.192 ], 00:05:23.192 "driver_specific": {} 00:05:23.192 }, 00:05:23.192 { 00:05:23.192 "name": "Passthru0", 00:05:23.192 "aliases": [ 00:05:23.192 "bff1b3d4-9472-5135-8e78-0cb1903d48d8" 00:05:23.192 ], 00:05:23.192 "product_name": "passthru", 00:05:23.192 "block_size": 512, 00:05:23.192 "num_blocks": 16384, 00:05:23.192 "uuid": "bff1b3d4-9472-5135-8e78-0cb1903d48d8", 00:05:23.192 "assigned_rate_limits": { 00:05:23.192 "rw_ios_per_sec": 0, 00:05:23.192 "rw_mbytes_per_sec": 0, 00:05:23.192 "r_mbytes_per_sec": 0, 00:05:23.192 "w_mbytes_per_sec": 0 00:05:23.192 }, 00:05:23.192 "claimed": false, 00:05:23.192 "zoned": false, 00:05:23.192 "supported_io_types": { 00:05:23.192 "read": true, 00:05:23.192 "write": true, 00:05:23.192 "unmap": true, 00:05:23.192 "write_zeroes": true, 00:05:23.192 "flush": true, 00:05:23.192 "reset": true, 00:05:23.192 "compare": false, 00:05:23.192 "compare_and_write": false, 00:05:23.192 "abort": true, 00:05:23.192 "nvme_admin": false, 00:05:23.192 "nvme_io": false 00:05:23.192 }, 00:05:23.192 "memory_domains": [ 00:05:23.192 { 00:05:23.192 "dma_device_id": "system", 00:05:23.192 "dma_device_type": 1 00:05:23.192 }, 00:05:23.192 { 00:05:23.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.192 "dma_device_type": 2 00:05:23.192 } 00:05:23.192 ], 00:05:23.192 "driver_specific": { 00:05:23.192 "passthru": { 00:05:23.192 "name": "Passthru0", 00:05:23.192 "base_bdev_name": "Malloc2" 00:05:23.192 } 00:05:23.192 } 00:05:23.192 } 00:05:23.192 ]' 00:05:23.192 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:23.452 00:05:23.452 real 0m0.252s 00:05:23.452 user 0m0.154s 00:05:23.452 sys 0m0.046s 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.452 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 END TEST rpc_daemon_integrity 00:05:23.452 ************************************ 00:05:23.452 12:49:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:23.452 12:49:01 rpc -- rpc/rpc.sh@84 -- # killprocess 3473492 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@946 -- # '[' -z 3473492 ']' 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@950 -- # kill -0 3473492 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@951 -- # uname 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3473492 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3473492' 00:05:23.452 killing process with pid 3473492 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@965 -- # kill 3473492 00:05:23.452 12:49:01 rpc -- common/autotest_common.sh@970 -- # wait 3473492 00:05:24.019 00:05:24.019 real 0m2.605s 00:05:24.019 user 0m3.244s 00:05:24.019 sys 0m0.826s 00:05:24.019 12:49:01 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.019 12:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.019 ************************************ 00:05:24.019 END TEST rpc 00:05:24.019 ************************************ 00:05:24.019 12:49:01 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:24.019 12:49:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
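[editor's note] The rpc_integrity and rpc_daemon_integrity runs above reduce to a short bdev lifecycle driven over the RPC socket. A minimal sketch with scripts/rpc.py, assuming the default /var/tmp/spdk.sock and the same 8 MiB / 512 B geometry the tests use (rpc_cmd is the suite's thin wrapper around rpc.py):

rpc=scripts/rpc.py
malloc=$($rpc bdev_malloc_create 8 512)              # 8 MiB at 512 B blocks -> num_blocks 16384
$rpc bdev_passthru_create -b "$malloc" -p Passthru0  # passthru claims the base bdev
$rpc bdev_get_bdevs | jq length                      # 2: base + passthru
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
$rpc bdev_get_bdevs | jq length                      # 0 again

The "claimed": true / "claim_type": "exclusive_write" fields in the dumps above are the visible effect of the passthru claim, and the trace_get_info dump from rpc_trace_cmd_test reflects a target started with the bdev tpoint group enabled (hence "tpoint_group_mask": "0x8" and the all-ones bdev tpoint_mask).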
00:05:24.019 12:49:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.019 12:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:24.019 ************************************ 00:05:24.019 START TEST skip_rpc 00:05:24.019 ************************************ 00:05:24.019 12:49:01 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:24.019 * Looking for test storage... 00:05:24.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:24.019 12:49:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:24.019 12:49:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:24.019 12:49:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:24.019 12:49:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.019 12:49:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.019 12:49:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.019 ************************************ 00:05:24.019 START TEST skip_rpc 00:05:24.019 ************************************ 00:05:24.019 12:49:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:24.019 12:49:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3474038 00:05:24.019 12:49:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.019 12:49:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:24.019 12:49:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:24.278 [2024-05-15 12:49:01.912535] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
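[editor's note] What follows is the heart of skip_rpc: with --no-rpc-server there is no listening socket, so the NOT wrapper expects the RPC to fail. The logic, approximately (paths as in this workspace; the suite's rpc_cmd/NOT helpers add the xtrace and exit-code bookkeeping seen below):

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                    # the test sleeps rather than waitforlisten,
                                           # since there is no socket to wait on
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered with no server" >&2
    exit 1
fi
kill "$spdk_pid"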
00:05:24.278 [2024-05-15 12:49:01.912589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474038 ] 00:05:24.278 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.278 [2024-05-15 12:49:01.983943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.278 [2024-05-15 12:49:02.072621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3474038 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3474038 ']' 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3474038 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3474038 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3474038' 00:05:29.556 killing process with pid 3474038 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3474038 00:05:29.556 12:49:06 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3474038 00:05:29.556 00:05:29.556 real 0m5.449s 00:05:29.556 user 0m5.170s 00:05:29.556 sys 0m0.310s 00:05:29.556 12:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.556 12:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.556 ************************************ 00:05:29.556 END TEST skip_rpc 
00:05:29.556 ************************************ 00:05:29.556 12:49:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:29.556 12:49:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.556 12:49:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.556 12:49:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.556 ************************************ 00:05:29.556 START TEST skip_rpc_with_json 00:05:29.556 ************************************ 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3474791 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3474791 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3474791 ']' 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.556 12:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.556 [2024-05-15 12:49:07.434681] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
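[editor's note] The skip_rpc_with_json sequence that follows (a failing nvmf_get_transports, then nvmf_create_transport, save_config, and a relaunch checked for 'TCP Transport Init') is a config round trip. In outline, with the socket defaults and file name taken from the log:

scripts/rpc.py nvmf_get_transports --trtype tcp      # error -19: nothing configured yet
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json    # the JSON dumped below
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
# pass condition: the second target's log contains 'TCP Transport Init'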
00:05:29.556 [2024-05-15 12:49:07.434732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474791 ] 00:05:29.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.815 [2024-05-15 12:49:07.505054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.815 [2024-05-15 12:49:07.595702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.384 [2024-05-15 12:49:08.251335] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:30.384 request: 00:05:30.384 { 00:05:30.384 "trtype": "tcp", 00:05:30.384 "method": "nvmf_get_transports", 00:05:30.384 "req_id": 1 00:05:30.384 } 00:05:30.384 Got JSON-RPC error response 00:05:30.384 response: 00:05:30.384 { 00:05:30.384 "code": -19, 00:05:30.384 "message": "No such device" 00:05:30.384 } 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.384 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.384 [2024-05-15 12:49:08.263430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.644 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:30.644 { 00:05:30.644 "subsystems": [ 00:05:30.644 { 00:05:30.644 "subsystem": "keyring", 00:05:30.644 "config": [] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "iobuf", 00:05:30.644 "config": [ 00:05:30.644 { 00:05:30.644 "method": "iobuf_set_options", 00:05:30.644 "params": { 00:05:30.644 "small_pool_count": 8192, 00:05:30.644 "large_pool_count": 1024, 00:05:30.644 "small_bufsize": 8192, 00:05:30.644 "large_bufsize": 135168 00:05:30.644 } 00:05:30.644 } 00:05:30.644 ] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "sock", 00:05:30.644 "config": [ 00:05:30.644 { 00:05:30.644 "method": "sock_impl_set_options", 00:05:30.644 "params": { 00:05:30.644 "impl_name": "posix", 00:05:30.644 "recv_buf_size": 2097152, 00:05:30.644 "send_buf_size": 2097152, 00:05:30.644 "enable_recv_pipe": true, 00:05:30.644 "enable_quickack": false, 00:05:30.644 
"enable_placement_id": 0, 00:05:30.644 "enable_zerocopy_send_server": true, 00:05:30.644 "enable_zerocopy_send_client": false, 00:05:30.644 "zerocopy_threshold": 0, 00:05:30.644 "tls_version": 0, 00:05:30.644 "enable_ktls": false 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "method": "sock_impl_set_options", 00:05:30.644 "params": { 00:05:30.644 "impl_name": "ssl", 00:05:30.644 "recv_buf_size": 4096, 00:05:30.644 "send_buf_size": 4096, 00:05:30.644 "enable_recv_pipe": true, 00:05:30.644 "enable_quickack": false, 00:05:30.644 "enable_placement_id": 0, 00:05:30.644 "enable_zerocopy_send_server": true, 00:05:30.644 "enable_zerocopy_send_client": false, 00:05:30.644 "zerocopy_threshold": 0, 00:05:30.644 "tls_version": 0, 00:05:30.644 "enable_ktls": false 00:05:30.644 } 00:05:30.644 } 00:05:30.644 ] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "vmd", 00:05:30.644 "config": [] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "accel", 00:05:30.644 "config": [ 00:05:30.644 { 00:05:30.644 "method": "accel_set_options", 00:05:30.644 "params": { 00:05:30.644 "small_cache_size": 128, 00:05:30.644 "large_cache_size": 16, 00:05:30.644 "task_count": 2048, 00:05:30.644 "sequence_count": 2048, 00:05:30.644 "buf_count": 2048 00:05:30.644 } 00:05:30.644 } 00:05:30.644 ] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "bdev", 00:05:30.644 "config": [ 00:05:30.644 { 00:05:30.644 "method": "bdev_set_options", 00:05:30.644 "params": { 00:05:30.644 "bdev_io_pool_size": 65535, 00:05:30.644 "bdev_io_cache_size": 256, 00:05:30.644 "bdev_auto_examine": true, 00:05:30.644 "iobuf_small_cache_size": 128, 00:05:30.644 "iobuf_large_cache_size": 16 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "method": "bdev_raid_set_options", 00:05:30.644 "params": { 00:05:30.644 "process_window_size_kb": 1024 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "method": "bdev_iscsi_set_options", 00:05:30.644 "params": { 00:05:30.644 "timeout_sec": 30 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "method": "bdev_nvme_set_options", 00:05:30.644 "params": { 00:05:30.644 "action_on_timeout": "none", 00:05:30.644 "timeout_us": 0, 00:05:30.644 "timeout_admin_us": 0, 00:05:30.644 "keep_alive_timeout_ms": 10000, 00:05:30.644 "arbitration_burst": 0, 00:05:30.644 "low_priority_weight": 0, 00:05:30.644 "medium_priority_weight": 0, 00:05:30.644 "high_priority_weight": 0, 00:05:30.644 "nvme_adminq_poll_period_us": 10000, 00:05:30.644 "nvme_ioq_poll_period_us": 0, 00:05:30.644 "io_queue_requests": 0, 00:05:30.644 "delay_cmd_submit": true, 00:05:30.644 "transport_retry_count": 4, 00:05:30.644 "bdev_retry_count": 3, 00:05:30.644 "transport_ack_timeout": 0, 00:05:30.644 "ctrlr_loss_timeout_sec": 0, 00:05:30.644 "reconnect_delay_sec": 0, 00:05:30.644 "fast_io_fail_timeout_sec": 0, 00:05:30.644 "disable_auto_failback": false, 00:05:30.644 "generate_uuids": false, 00:05:30.644 "transport_tos": 0, 00:05:30.644 "nvme_error_stat": false, 00:05:30.644 "rdma_srq_size": 0, 00:05:30.644 "io_path_stat": false, 00:05:30.644 "allow_accel_sequence": false, 00:05:30.644 "rdma_max_cq_size": 0, 00:05:30.644 "rdma_cm_event_timeout_ms": 0, 00:05:30.644 "dhchap_digests": [ 00:05:30.644 "sha256", 00:05:30.644 "sha384", 00:05:30.644 "sha512" 00:05:30.644 ], 00:05:30.644 "dhchap_dhgroups": [ 00:05:30.644 "null", 00:05:30.644 "ffdhe2048", 00:05:30.644 "ffdhe3072", 00:05:30.644 "ffdhe4096", 00:05:30.644 "ffdhe6144", 00:05:30.644 "ffdhe8192" 00:05:30.644 ] 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 
00:05:30.644 "method": "bdev_nvme_set_hotplug", 00:05:30.644 "params": { 00:05:30.644 "period_us": 100000, 00:05:30.644 "enable": false 00:05:30.644 } 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "method": "bdev_wait_for_examine" 00:05:30.644 } 00:05:30.644 ] 00:05:30.644 }, 00:05:30.644 { 00:05:30.644 "subsystem": "scsi", 00:05:30.645 "config": null 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "scheduler", 00:05:30.645 "config": [ 00:05:30.645 { 00:05:30.645 "method": "framework_set_scheduler", 00:05:30.645 "params": { 00:05:30.645 "name": "static" 00:05:30.645 } 00:05:30.645 } 00:05:30.645 ] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "vhost_scsi", 00:05:30.645 "config": [] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "vhost_blk", 00:05:30.645 "config": [] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "ublk", 00:05:30.645 "config": [] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "nbd", 00:05:30.645 "config": [] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "nvmf", 00:05:30.645 "config": [ 00:05:30.645 { 00:05:30.645 "method": "nvmf_set_config", 00:05:30.645 "params": { 00:05:30.645 "discovery_filter": "match_any", 00:05:30.645 "admin_cmd_passthru": { 00:05:30.645 "identify_ctrlr": false 00:05:30.645 } 00:05:30.645 } 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "method": "nvmf_set_max_subsystems", 00:05:30.645 "params": { 00:05:30.645 "max_subsystems": 1024 00:05:30.645 } 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "method": "nvmf_set_crdt", 00:05:30.645 "params": { 00:05:30.645 "crdt1": 0, 00:05:30.645 "crdt2": 0, 00:05:30.645 "crdt3": 0 00:05:30.645 } 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "method": "nvmf_create_transport", 00:05:30.645 "params": { 00:05:30.645 "trtype": "TCP", 00:05:30.645 "max_queue_depth": 128, 00:05:30.645 "max_io_qpairs_per_ctrlr": 127, 00:05:30.645 "in_capsule_data_size": 4096, 00:05:30.645 "max_io_size": 131072, 00:05:30.645 "io_unit_size": 131072, 00:05:30.645 "max_aq_depth": 128, 00:05:30.645 "num_shared_buffers": 511, 00:05:30.645 "buf_cache_size": 4294967295, 00:05:30.645 "dif_insert_or_strip": false, 00:05:30.645 "zcopy": false, 00:05:30.645 "c2h_success": true, 00:05:30.645 "sock_priority": 0, 00:05:30.645 "abort_timeout_sec": 1, 00:05:30.645 "ack_timeout": 0, 00:05:30.645 "data_wr_pool_size": 0 00:05:30.645 } 00:05:30.645 } 00:05:30.645 ] 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "subsystem": "iscsi", 00:05:30.645 "config": [ 00:05:30.645 { 00:05:30.645 "method": "iscsi_set_options", 00:05:30.645 "params": { 00:05:30.645 "node_base": "iqn.2016-06.io.spdk", 00:05:30.645 "max_sessions": 128, 00:05:30.645 "max_connections_per_session": 2, 00:05:30.645 "max_queue_depth": 64, 00:05:30.645 "default_time2wait": 2, 00:05:30.645 "default_time2retain": 20, 00:05:30.645 "first_burst_length": 8192, 00:05:30.645 "immediate_data": true, 00:05:30.645 "allow_duplicated_isid": false, 00:05:30.645 "error_recovery_level": 0, 00:05:30.645 "nop_timeout": 60, 00:05:30.645 "nop_in_interval": 30, 00:05:30.645 "disable_chap": false, 00:05:30.645 "require_chap": false, 00:05:30.645 "mutual_chap": false, 00:05:30.645 "chap_group": 0, 00:05:30.645 "max_large_datain_per_connection": 64, 00:05:30.645 "max_r2t_per_connection": 4, 00:05:30.645 "pdu_pool_size": 36864, 00:05:30.645 "immediate_data_pool_size": 16384, 00:05:30.645 "data_out_pool_size": 2048 00:05:30.645 } 00:05:30.645 } 00:05:30.645 ] 00:05:30.645 } 00:05:30.645 ] 00:05:30.645 } 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3474791 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3474791 ']' 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3474791 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3474791 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3474791' 00:05:30.645 killing process with pid 3474791 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3474791 00:05:30.645 12:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3474791 00:05:31.216 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3474982 00:05:31.216 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:31.216 12:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:36.567 12:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3474982 00:05:36.567 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3474982 ']' 00:05:36.567 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3474982 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3474982 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3474982' 00:05:36.568 killing process with pid 3474982 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3474982 00:05:36.568 12:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3474982 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:36.568 00:05:36.568 real 0m6.858s 00:05:36.568 user 0m6.606s 00:05:36.568 sys 0m0.662s 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 
************************************ 00:05:36.568 END TEST skip_rpc_with_json 00:05:36.568 ************************************ 00:05:36.568 12:49:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:36.568 12:49:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.568 12:49:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.568 12:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 ************************************ 00:05:36.568 START TEST skip_rpc_with_delay 00:05:36.568 ************************************ 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.568 [2024-05-15 12:49:14.374886] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:36.568 [2024-05-15 12:49:14.374960] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.568 00:05:36.568 real 0m0.051s 00:05:36.568 user 0m0.031s 00:05:36.568 sys 0m0.019s 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.568 12:49:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 ************************************ 00:05:36.568 END TEST skip_rpc_with_delay 00:05:36.568 ************************************ 00:05:36.832 12:49:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:36.832 12:49:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:36.832 12:49:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:36.832 12:49:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.832 12:49:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.832 12:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.832 ************************************ 00:05:36.832 START TEST exit_on_failed_rpc_init 00:05:36.832 ************************************ 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3475769 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3475769 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3475769 ']' 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.832 12:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.832 [2024-05-15 12:49:14.516346] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
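[editor's note] The skip_rpc_with_delay pass above hinges on a single startup-time rejection: --wait-for-rpc is meaningless when the RPC server is disabled, so app startup must fail before any subsystem init. Roughly:

build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# expected on stderr:
#   app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
# and a non-zero exit, which the NOT wrapper converts into a test pass (es=1 above)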
00:05:36.832 [2024-05-15 12:49:14.516392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475769 ] 00:05:36.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.832 [2024-05-15 12:49:14.581323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.832 [2024-05-15 12:49:14.672044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.769 [2024-05-15 12:49:15.378760] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:05:37.769 [2024-05-15 12:49:15.378820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475864 ] 00:05:37.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.769 [2024-05-15 12:49:15.446866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.769 [2024-05-15 12:49:15.531626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.769 [2024-05-15 12:49:15.531703] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
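[editor's note] The two rpc.c errors around this point are the intended collision: exit_on_failed_rpc_init starts a second target (-m 0x2) against the same default /var/tmp/spdk.sock the first instance already owns, and expects spdk_app_start to bail out. Outside the test, two targets coexist by giving the second its own socket; /var/tmp/spdk2.sock below is an arbitrary example path, not one used by this run:

build/bin/spdk_tgt -m 0x1 &                           # owns /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &    # separate RPC socket, no clash
scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version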
00:05:37.769 [2024-05-15 12:49:15.531714] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:37.769 [2024-05-15 12:49:15.531723] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3475769 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3475769 ']' 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3475769 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.769 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3475769 00:05:38.028 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.028 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.028 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3475769' 00:05:38.028 killing process with pid 3475769 00:05:38.028 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3475769 00:05:38.028 12:49:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3475769 00:05:38.286 00:05:38.286 real 0m1.568s 00:05:38.286 user 0m1.801s 00:05:38.286 sys 0m0.448s 00:05:38.286 12:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.286 12:49:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.287 ************************************ 00:05:38.287 END TEST exit_on_failed_rpc_init 00:05:38.287 ************************************ 00:05:38.287 12:49:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:38.287 00:05:38.287 real 0m14.375s 00:05:38.287 user 0m13.781s 00:05:38.287 sys 0m1.725s 00:05:38.287 12:49:16 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.287 12:49:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.287 ************************************ 00:05:38.287 END TEST skip_rpc 00:05:38.287 ************************************ 00:05:38.287 12:49:16 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:38.287 12:49:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.287 12:49:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.287 12:49:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.546 ************************************ 00:05:38.546 START TEST rpc_client 00:05:38.546 ************************************ 00:05:38.546 12:49:16 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:38.546 * Looking for test storage... 00:05:38.546 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:38.546 12:49:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:38.546 OK 00:05:38.546 12:49:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:38.546 00:05:38.546 real 0m0.132s 00:05:38.546 user 0m0.052s 00:05:38.546 sys 0m0.088s 00:05:38.546 12:49:16 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.546 12:49:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:38.546 ************************************ 00:05:38.546 END TEST rpc_client 00:05:38.546 ************************************ 00:05:38.546 12:49:16 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:38.546 12:49:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.546 12:49:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.546 12:49:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.546 ************************************ 00:05:38.546 START TEST json_config 00:05:38.546 ************************************ 00:05:38.546 12:49:16 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:38.805 12:49:16 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.805 12:49:16 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.805 12:49:16 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.805 12:49:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.805 12:49:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.805 12:49:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.805 12:49:16 json_config -- paths/export.sh@5 -- # export PATH 00:05:38.805 12:49:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@47 -- # : 0 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.805 12:49:16 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:38.805 INFO: JSON configuration test init 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.805 12:49:16 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:38.805 12:49:16 json_config -- json_config/common.sh@9 -- # local app=target 00:05:38.805 12:49:16 json_config -- json_config/common.sh@10 -- # shift 00:05:38.805 12:49:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.805 12:49:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.805 12:49:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.805 12:49:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.805 12:49:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.805 12:49:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3476077 00:05:38.805 12:49:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.805 Waiting for target to run... 
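What the trace above amounts to: json_config_test_start_app keeps per-role state in associative arrays (app_pid, app_socket, app_params, configs_path) and launches the target with --wait-for-rpc, so nothing initializes until the test injects configuration over the RPC socket. A minimal standalone sketch of that launch-and-wait idiom, with paths and flags taken from this log; the polling loop is an assumption standing in for the framework's waitforlisten helper:

    rpc_sock=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$rpc_sock" --wait-for-rpc &
    app_pid=$!
    # Poll the UNIX domain socket until the target answers; rpc_get_methods
    # is serviceable even before framework_start_init.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
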
00:05:38.805 12:49:16 json_config -- json_config/common.sh@25 -- # waitforlisten 3476077 /var/tmp/spdk_tgt.sock 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@827 -- # '[' -z 3476077 ']' 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.805 12:49:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.805 12:49:16 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.806 12:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.806 [2024-05-15 12:49:16.549395] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:05:38.806 [2024-05-15 12:49:16.549461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476077 ] 00:05:38.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.064 [2024-05-15 12:49:16.855649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.064 [2024-05-15 12:49:16.930867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.631 12:49:17 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.631 12:49:17 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:39.631 12:49:17 json_config -- json_config/common.sh@26 -- # echo '' 00:05:39.631 00:05:39.631 12:49:17 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:39.631 12:49:17 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:39.631 12:49:17 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.631 12:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.631 12:49:17 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:39.632 12:49:17 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:39.632 12:49:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.632 12:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.632 12:49:17 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:39.632 12:49:17 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:39.632 12:49:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@45 -- 
# local ret=0 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:42.921 12:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:42.921 12:49:20 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:42.921 12:49:20 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:42.921 12:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:49.500 12:49:26 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:49.501 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:49.501 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:49.501 Found net devices under 0000:18:00.0: mlx_0_0 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:49.501 Found net devices under 0000:18:00.1: mlx_0_1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@58 -- # uname 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
00:05:49.501 12:49:26 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:49.501 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:49.501 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:05:49.501 altname enp24s0f0np0 00:05:49.501 altname ens785f0np0 00:05:49.501 inet 192.168.100.8/24 scope global mlx_0_0 00:05:49.501 valid_lft forever preferred_lft forever 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@75 -- # [[ -z 
192.168.100.9 ]] 00:05:49.501 12:49:26 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:49.501 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:49.501 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:05:49.501 altname enp24s0f1np1 00:05:49.501 altname ens785f1np1 00:05:49.501 inet 192.168.100.9/24 scope global mlx_0_1 00:05:49.501 valid_lft forever preferred_lft forever 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@422 -- # return 0 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:49.501 192.168.100.9' 
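The allocate_nic_ips / get_available_rdma_ips walk above reduces to one pipeline per RDMA netdev: list the interface's IPv4 addresses, take the addr/prefix column, strip the prefix. The same idiom in isolation (the interface name is taken from this log):

    iface=mlx_0_0
    ip_addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
    [ -n "$ip_addr" ] || { echo "no IPv4 address on $iface" >&2; exit 1; }
    echo "$iface -> $ip_addr"   # e.g. mlx_0_0 -> 192.168.100.8
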
00:05:49.501 12:49:27 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:49.501 192.168.100.9' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:49.501 192.168.100.9' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:49.501 12:49:27 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:49.501 12:49:27 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:49.501 12:49:27 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.501 12:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.501 MallocForNvmf0 00:05:49.501 12:49:27 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.501 12:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.760 MallocForNvmf1 00:05:49.760 12:49:27 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:49.760 12:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:49.760 [2024-05-15 12:49:27.612365] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:49.760 [2024-05-15 12:49:27.641107] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed72c0/0x2004180) succeed. 00:05:50.019 [2024-05-15 12:49:27.653409] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed94b0/0x1ee4040) succeed. 
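With 192.168.100.8 discovered and nvme-rdma loaded, create_nvmf_subsystem_config drives the target entirely over JSON-RPC: two malloc bdevs and an RDMA transport here, then (in the log lines that follow) a subsystem, its namespaces, and a listener. Condensed, the RPC sequence in this run is:

    # Helper assumed for brevity; socket path is taken from this log.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t rdma -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
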
00:05:50.019 12:49:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.019 12:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.019 12:49:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.019 12:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.278 12:49:28 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.278 12:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.537 12:49:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:50.537 12:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:50.537 [2024-05-15 12:49:28.322475] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:50.537 [2024-05-15 12:49:28.322890] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:50.537 12:49:28 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:50.537 12:49:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.537 12:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.537 12:49:28 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:50.537 12:49:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.537 12:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.796 12:49:28 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:50.796 12:49:28 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.796 12:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.796 MallocBdevForConfigChangeCheck 00:05:50.796 12:49:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:50.796 12:49:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.796 12:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.796 12:49:28 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:50.796 12:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.363 12:49:28 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: 
shutting down applications...' 00:05:51.363 INFO: shutting down applications... 00:05:51.363 12:49:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:51.363 12:49:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:51.363 12:49:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:51.363 12:49:28 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:59.486 Calling clear_iscsi_subsystem 00:05:59.486 Calling clear_nvmf_subsystem 00:05:59.486 Calling clear_nbd_subsystem 00:05:59.486 Calling clear_ublk_subsystem 00:05:59.486 Calling clear_vhost_blk_subsystem 00:05:59.486 Calling clear_vhost_scsi_subsystem 00:05:59.486 Calling clear_bdev_subsystem 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@345 -- # break 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:59.486 12:49:36 json_config -- json_config/common.sh@31 -- # local app=target 00:05:59.486 12:49:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@35 -- # [[ -n 3476077 ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3476077 00:05:59.486 [2024-05-15 12:49:36.406404] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:59.486 12:49:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.486 12:49:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.486 12:49:36 json_config -- json_config/common.sh@41 -- # kill -0 3476077 00:05:59.486 12:49:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.486 [2024-05-15 12:49:36.521299] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:59.486 12:49:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.486 12:49:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.486 12:49:36 json_config -- json_config/common.sh@41 -- # kill -0 3476077 00:05:59.486 12:49:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.486 12:49:36 json_config -- json_config/common.sh@43 -- # break 00:05:59.486 12:49:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.486 12:49:36 json_config -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.486 SPDK target shutdown done 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:59.486 INFO: relaunching applications... 00:05:59.486 12:49:36 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.486 12:49:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:59.486 12:49:36 json_config -- json_config/common.sh@10 -- # shift 00:05:59.486 12:49:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.486 12:49:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.486 12:49:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3480861 00:05:59.486 12:49:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.486 12:49:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.486 Waiting for target to run... 00:05:59.486 12:49:36 json_config -- json_config/common.sh@25 -- # waitforlisten 3480861 /var/tmp/spdk_tgt.sock 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@827 -- # '[' -z 3480861 ']' 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.486 12:49:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.486 [2024-05-15 12:49:36.958202] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:05:59.486 [2024-05-15 12:49:36.958263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3480861 ] 00:05:59.486 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.486 [2024-05-15 12:49:37.249768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.486 [2024-05-15 12:49:37.325042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.775 [2024-05-15 12:49:40.350709] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15f49b0/0x147a440) succeed. 00:06:02.775 [2024-05-15 12:49:40.361603] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15f1a10/0x14da420) succeed. 
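The shutdown/relaunch dance above is the heart of the json_config test: persist the live configuration, stop the target, and restart it from the saved file so the rest of the run can prove the round trip is lossless. A sketch of that round trip, reusing the rpc helper, rpc_sock, and app_pid variables assumed in the earlier sketches:

    rpc save_config > spdk_tgt_config.json
    kill -SIGINT "$app_pid" && wait "$app_pid"
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$rpc_sock" \
        --json spdk_tgt_config.json &
    app_pid=$!
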
00:06:02.775 [2024-05-15 12:49:40.411743] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:02.775 [2024-05-15 12:49:40.412071] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:02.775 12:49:40 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.775 12:49:40 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:02.775 12:49:40 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.775 00:06:02.775 12:49:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:02.775 12:49:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:02.775 INFO: Checking if target configuration is the same... 00:06:02.775 12:49:40 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.775 12:49:40 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:02.775 12:49:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.775 + '[' 2 -ne 2 ']' 00:06:02.775 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:02.775 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:02.775 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:02.775 +++ basename /dev/fd/62 00:06:02.775 ++ mktemp /tmp/62.XXX 00:06:02.775 + tmp_file_1=/tmp/62.C1e 00:06:02.775 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.775 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:02.775 + tmp_file_2=/tmp/spdk_tgt_config.json.Q1A 00:06:02.775 + ret=0 00:06:02.775 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.034 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.034 + diff -u /tmp/62.C1e /tmp/spdk_tgt_config.json.Q1A 00:06:03.034 + echo 'INFO: JSON config files are the same' 00:06:03.034 INFO: JSON config files are the same 00:06:03.034 + rm /tmp/62.C1e /tmp/spdk_tgt_config.json.Q1A 00:06:03.034 + exit 0 00:06:03.034 12:49:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:03.034 12:49:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:03.034 INFO: changing configuration and checking if this can be detected... 
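json_diff.sh normalizes both sides before comparing, because save_config does not guarantee stable ordering: each config is piped through config_filter.py -method sort and only then diffed. The same check in isolation, under the same assumed helpers; paths match this log:

    rpc save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'
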
00:06:03.034 12:49:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.034 12:49:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.294 12:49:40 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.294 12:49:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:03.294 12:49:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.294 + '[' 2 -ne 2 ']' 00:06:03.294 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.294 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:03.294 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:03.294 +++ basename /dev/fd/62 00:06:03.294 ++ mktemp /tmp/62.XXX 00:06:03.294 + tmp_file_1=/tmp/62.F4V 00:06:03.294 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.294 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.294 + tmp_file_2=/tmp/spdk_tgt_config.json.VXX 00:06:03.294 + ret=0 00:06:03.294 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.555 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.555 + diff -u /tmp/62.F4V /tmp/spdk_tgt_config.json.VXX 00:06:03.555 + ret=1 00:06:03.555 + echo '=== Start of file: /tmp/62.F4V ===' 00:06:03.555 + cat /tmp/62.F4V 00:06:03.555 + echo '=== End of file: /tmp/62.F4V ===' 00:06:03.555 + echo '' 00:06:03.555 + echo '=== Start of file: /tmp/spdk_tgt_config.json.VXX ===' 00:06:03.555 + cat /tmp/spdk_tgt_config.json.VXX 00:06:03.555 + echo '=== End of file: /tmp/spdk_tgt_config.json.VXX ===' 00:06:03.555 + echo '' 00:06:03.555 + rm /tmp/62.F4V /tmp/spdk_tgt_config.json.VXX 00:06:03.555 + exit 1 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:03.555 INFO: configuration change detected. 
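The detection leg works by mutating a canary: MallocBdevForConfigChangeCheck exists purely so the test can delete it and assert that the sorted diff now fails (the ret=1 above). Sketched with the same assumed helpers and the /tmp/file.json baseline from the previous step:

    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    rpc save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    if ! diff -u /tmp/live.json /tmp/file.json; then
        echo 'INFO: configuration change detected.'
    fi
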
00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 3480861 ]] 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.555 12:49:41 json_config -- json_config/json_config.sh@323 -- # killprocess 3480861 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@946 -- # '[' -z 3480861 ']' 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@950 -- # kill -0 3480861 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@951 -- # uname 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.555 12:49:41 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3480861 00:06:03.814 12:49:41 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.814 12:49:41 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.814 12:49:41 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3480861' 00:06:03.814 killing process with pid 3480861 00:06:03.814 12:49:41 json_config -- common/autotest_common.sh@965 -- # kill 3480861 00:06:03.814 [2024-05-15 12:49:41.470400] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:03.814 12:49:41 json_config -- common/autotest_common.sh@970 -- # wait 3480861 00:06:03.814 [2024-05-15 12:49:41.583767] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:06:11.939 12:49:48 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.939 12:49:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:11.939 12:49:48 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.939 12:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 12:49:48 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:11.939 12:49:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:11.939 INFO: Success 00:06:11.939 12:49:48 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@117 -- # sync 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:11.939 12:49:48 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:11.939 00:06:11.939 real 0m32.276s 00:06:11.939 user 0m34.741s 00:06:11.939 sys 0m6.770s 00:06:11.939 12:49:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.939 12:49:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 ************************************ 00:06:11.939 END TEST json_config 00:06:11.939 ************************************ 00:06:11.939 12:49:48 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.939 12:49:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.939 12:49:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.939 12:49:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.939 ************************************ 00:06:11.939 START TEST json_config_extra_key 00:06:11.939 ************************************ 00:06:11.939 12:49:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@19 
-- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:11.940 12:49:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.940 12:49:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.940 12:49:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.940 12:49:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.940 12:49:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.940 12:49:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.940 12:49:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.940 12:49:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.940 12:49:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.940 12:49:48 
json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:11.940 INFO: launching applications... 00:06:11.940 12:49:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3482610 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.940 Waiting for target to run... 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3482610 /var/tmp/spdk_tgt.sock 00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3482610 ']' 00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
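json_config_extra_key boots the target the other way around: no --wait-for-rpc, the whole configuration comes from a prebuilt file via --json, and the test only has to verify startup and a clean SIGINT shutdown. A sketch of that mode plus the 30-iteration, 0.5 s shutdown poll visible in the following lines; the extra_key.json path is taken from this log:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    app_pid=$!
    # Graceful stop: SIGINT, then poll liveness with kill -0.
    kill -SIGINT "$app_pid"
    for _ in $(seq 1 30); do
        kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done
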
00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.940 12:49:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.940 12:49:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.940 [2024-05-15 12:49:48.878715] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:11.940 [2024-05-15 12:49:48.878778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482610 ] 00:06:11.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.940 [2024-05-15 12:49:49.163925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.940 [2024-05-15 12:49:49.238227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.940 12:49:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.940 12:49:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.940 00:06:11.940 12:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.940 INFO: shutting down applications... 00:06:11.940 12:49:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3482610 ]] 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3482610 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3482610 00:06:11.940 12:49:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3482610 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.508 12:49:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.508 SPDK target shutdown done 00:06:12.508 12:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.508 Success 00:06:12.508 00:06:12.508 real 0m1.424s 00:06:12.508 user 0m1.246s 00:06:12.508 sys 0m0.399s 00:06:12.508 12:49:50 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.508 12:49:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:12.508 ************************************ 00:06:12.508 END TEST json_config_extra_key 00:06:12.508 ************************************ 00:06:12.508 12:49:50 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.508 12:49:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.508 12:49:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.508 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:06:12.508 ************************************ 00:06:12.508 START TEST alias_rpc 00:06:12.508 ************************************ 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.508 * Looking for test storage... 00:06:12.508 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:12.508 12:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.508 12:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3482842 00:06:12.508 12:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.508 12:49:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3482842 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3482842 ']' 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.508 12:49:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.767 [2024-05-15 12:49:50.414181] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:12.767 [2024-05-15 12:49:50.414244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482842 ] 00:06:12.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.767 [2024-05-15 12:49:50.485355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.767 [2024-05-15 12:49:50.565553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.703 12:49:51 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.703 12:49:51 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:13.703 12:49:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:13.703 12:49:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3482842 00:06:13.703 12:49:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3482842 ']' 00:06:13.703 12:49:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3482842 00:06:13.703 12:49:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482842 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482842' 00:06:13.704 killing process with pid 3482842 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 3482842 00:06:13.704 12:49:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 3482842 00:06:14.272 00:06:14.272 real 0m1.590s 00:06:14.272 user 0m1.683s 00:06:14.272 sys 0m0.457s 00:06:14.272 12:49:51 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.272 12:49:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.272 ************************************ 00:06:14.272 END TEST alias_rpc 00:06:14.272 ************************************ 00:06:14.272 12:49:51 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:14.272 12:49:51 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.272 12:49:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.272 12:49:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.272 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.272 ************************************ 00:06:14.272 START TEST spdkcli_tcp 00:06:14.272 ************************************ 00:06:14.272 12:49:51 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.272 * Looking for test storage... 
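alias_rpc wraps its whole run in 'trap killprocess ... ERR', and the teardown above shows what killprocess verifies before signalling: the PID must still be alive (kill -0), its comm name is read with ps so that sudo wrappers get separate handling, and only then is the process killed and reaped. A condensed sketch of the non-sudo path, with the real helper's sudo branch and retry details omitted:

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # already gone?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [[ $name == sudo ]] && return 1            # sudo processes take a different path
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }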
00:06:14.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3483141 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3483141 00:06:14.272 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3483141 ']' 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.272 12:49:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.272 [2024-05-15 12:49:52.085794] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:14.272 [2024-05-15 12:49:52.085859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483141 ] 00:06:14.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.531 [2024-05-15 12:49:52.159281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.531 [2024-05-15 12:49:52.251500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.531 [2024-05-15 12:49:52.251503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.100 12:49:52 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.100 12:49:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:15.100 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3483266 00:06:15.100 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.100 12:49:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.359 [ 00:06:15.359 "bdev_malloc_delete", 00:06:15.359 "bdev_malloc_create", 00:06:15.359 "bdev_null_resize", 00:06:15.359 "bdev_null_delete", 00:06:15.359 "bdev_null_create", 00:06:15.359 "bdev_nvme_cuse_unregister", 00:06:15.359 "bdev_nvme_cuse_register", 00:06:15.359 "bdev_opal_new_user", 00:06:15.359 "bdev_opal_set_lock_state", 00:06:15.359 "bdev_opal_delete", 00:06:15.359 "bdev_opal_get_info", 00:06:15.359 "bdev_opal_create", 00:06:15.359 "bdev_nvme_opal_revert", 00:06:15.359 "bdev_nvme_opal_init", 00:06:15.359 "bdev_nvme_send_cmd", 00:06:15.359 "bdev_nvme_get_path_iostat", 00:06:15.359 "bdev_nvme_get_mdns_discovery_info", 00:06:15.359 "bdev_nvme_stop_mdns_discovery", 00:06:15.359 "bdev_nvme_start_mdns_discovery", 00:06:15.359 "bdev_nvme_set_multipath_policy", 00:06:15.359 "bdev_nvme_set_preferred_path", 00:06:15.359 "bdev_nvme_get_io_paths", 00:06:15.359 "bdev_nvme_remove_error_injection", 00:06:15.359 "bdev_nvme_add_error_injection", 00:06:15.359 "bdev_nvme_get_discovery_info", 00:06:15.359 "bdev_nvme_stop_discovery", 00:06:15.359 "bdev_nvme_start_discovery", 00:06:15.359 "bdev_nvme_get_controller_health_info", 00:06:15.359 "bdev_nvme_disable_controller", 00:06:15.359 "bdev_nvme_enable_controller", 00:06:15.359 "bdev_nvme_reset_controller", 00:06:15.359 "bdev_nvme_get_transport_statistics", 00:06:15.359 "bdev_nvme_apply_firmware", 00:06:15.359 "bdev_nvme_detach_controller", 00:06:15.359 "bdev_nvme_get_controllers", 00:06:15.359 "bdev_nvme_attach_controller", 00:06:15.359 "bdev_nvme_set_hotplug", 00:06:15.359 "bdev_nvme_set_options", 00:06:15.359 "bdev_passthru_delete", 00:06:15.359 "bdev_passthru_create", 00:06:15.359 "bdev_lvol_check_shallow_copy", 00:06:15.359 "bdev_lvol_start_shallow_copy", 00:06:15.359 "bdev_lvol_grow_lvstore", 00:06:15.359 "bdev_lvol_get_lvols", 00:06:15.359 "bdev_lvol_get_lvstores", 00:06:15.359 "bdev_lvol_delete", 00:06:15.359 "bdev_lvol_set_read_only", 00:06:15.359 "bdev_lvol_resize", 00:06:15.359 "bdev_lvol_decouple_parent", 00:06:15.359 "bdev_lvol_inflate", 00:06:15.359 "bdev_lvol_rename", 00:06:15.359 "bdev_lvol_clone_bdev", 00:06:15.359 "bdev_lvol_clone", 00:06:15.359 "bdev_lvol_snapshot", 00:06:15.359 "bdev_lvol_create", 00:06:15.359 "bdev_lvol_delete_lvstore", 00:06:15.359 "bdev_lvol_rename_lvstore", 00:06:15.359 "bdev_lvol_create_lvstore", 00:06:15.359 "bdev_raid_set_options", 
00:06:15.359 "bdev_raid_remove_base_bdev", 00:06:15.359 "bdev_raid_add_base_bdev", 00:06:15.359 "bdev_raid_delete", 00:06:15.359 "bdev_raid_create", 00:06:15.359 "bdev_raid_get_bdevs", 00:06:15.359 "bdev_error_inject_error", 00:06:15.359 "bdev_error_delete", 00:06:15.359 "bdev_error_create", 00:06:15.359 "bdev_split_delete", 00:06:15.359 "bdev_split_create", 00:06:15.359 "bdev_delay_delete", 00:06:15.359 "bdev_delay_create", 00:06:15.359 "bdev_delay_update_latency", 00:06:15.359 "bdev_zone_block_delete", 00:06:15.359 "bdev_zone_block_create", 00:06:15.359 "blobfs_create", 00:06:15.359 "blobfs_detect", 00:06:15.359 "blobfs_set_cache_size", 00:06:15.359 "bdev_aio_delete", 00:06:15.359 "bdev_aio_rescan", 00:06:15.359 "bdev_aio_create", 00:06:15.359 "bdev_ftl_set_property", 00:06:15.359 "bdev_ftl_get_properties", 00:06:15.359 "bdev_ftl_get_stats", 00:06:15.359 "bdev_ftl_unmap", 00:06:15.359 "bdev_ftl_unload", 00:06:15.360 "bdev_ftl_delete", 00:06:15.360 "bdev_ftl_load", 00:06:15.360 "bdev_ftl_create", 00:06:15.360 "bdev_virtio_attach_controller", 00:06:15.360 "bdev_virtio_scsi_get_devices", 00:06:15.360 "bdev_virtio_detach_controller", 00:06:15.360 "bdev_virtio_blk_set_hotplug", 00:06:15.360 "bdev_iscsi_delete", 00:06:15.360 "bdev_iscsi_create", 00:06:15.360 "bdev_iscsi_set_options", 00:06:15.360 "accel_error_inject_error", 00:06:15.360 "ioat_scan_accel_module", 00:06:15.360 "dsa_scan_accel_module", 00:06:15.360 "iaa_scan_accel_module", 00:06:15.360 "keyring_file_remove_key", 00:06:15.360 "keyring_file_add_key", 00:06:15.360 "iscsi_get_histogram", 00:06:15.360 "iscsi_enable_histogram", 00:06:15.360 "iscsi_set_options", 00:06:15.360 "iscsi_get_auth_groups", 00:06:15.360 "iscsi_auth_group_remove_secret", 00:06:15.360 "iscsi_auth_group_add_secret", 00:06:15.360 "iscsi_delete_auth_group", 00:06:15.360 "iscsi_create_auth_group", 00:06:15.360 "iscsi_set_discovery_auth", 00:06:15.360 "iscsi_get_options", 00:06:15.360 "iscsi_target_node_request_logout", 00:06:15.360 "iscsi_target_node_set_redirect", 00:06:15.360 "iscsi_target_node_set_auth", 00:06:15.360 "iscsi_target_node_add_lun", 00:06:15.360 "iscsi_get_stats", 00:06:15.360 "iscsi_get_connections", 00:06:15.360 "iscsi_portal_group_set_auth", 00:06:15.360 "iscsi_start_portal_group", 00:06:15.360 "iscsi_delete_portal_group", 00:06:15.360 "iscsi_create_portal_group", 00:06:15.360 "iscsi_get_portal_groups", 00:06:15.360 "iscsi_delete_target_node", 00:06:15.360 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.360 "iscsi_target_node_add_pg_ig_maps", 00:06:15.360 "iscsi_create_target_node", 00:06:15.360 "iscsi_get_target_nodes", 00:06:15.360 "iscsi_delete_initiator_group", 00:06:15.360 "iscsi_initiator_group_remove_initiators", 00:06:15.360 "iscsi_initiator_group_add_initiators", 00:06:15.360 "iscsi_create_initiator_group", 00:06:15.360 "iscsi_get_initiator_groups", 00:06:15.360 "nvmf_set_crdt", 00:06:15.360 "nvmf_set_config", 00:06:15.360 "nvmf_set_max_subsystems", 00:06:15.360 "nvmf_stop_mdns_prr", 00:06:15.360 "nvmf_publish_mdns_prr", 00:06:15.360 "nvmf_subsystem_get_listeners", 00:06:15.360 "nvmf_subsystem_get_qpairs", 00:06:15.360 "nvmf_subsystem_get_controllers", 00:06:15.360 "nvmf_get_stats", 00:06:15.360 "nvmf_get_transports", 00:06:15.360 "nvmf_create_transport", 00:06:15.360 "nvmf_get_targets", 00:06:15.360 "nvmf_delete_target", 00:06:15.360 "nvmf_create_target", 00:06:15.360 "nvmf_subsystem_allow_any_host", 00:06:15.360 "nvmf_subsystem_remove_host", 00:06:15.360 "nvmf_subsystem_add_host", 00:06:15.360 "nvmf_ns_remove_host", 00:06:15.360 
"nvmf_ns_add_host", 00:06:15.360 "nvmf_subsystem_remove_ns", 00:06:15.360 "nvmf_subsystem_add_ns", 00:06:15.360 "nvmf_subsystem_listener_set_ana_state", 00:06:15.360 "nvmf_discovery_get_referrals", 00:06:15.360 "nvmf_discovery_remove_referral", 00:06:15.360 "nvmf_discovery_add_referral", 00:06:15.360 "nvmf_subsystem_remove_listener", 00:06:15.360 "nvmf_subsystem_add_listener", 00:06:15.360 "nvmf_delete_subsystem", 00:06:15.360 "nvmf_create_subsystem", 00:06:15.360 "nvmf_get_subsystems", 00:06:15.360 "env_dpdk_get_mem_stats", 00:06:15.360 "nbd_get_disks", 00:06:15.360 "nbd_stop_disk", 00:06:15.360 "nbd_start_disk", 00:06:15.360 "ublk_recover_disk", 00:06:15.360 "ublk_get_disks", 00:06:15.360 "ublk_stop_disk", 00:06:15.360 "ublk_start_disk", 00:06:15.360 "ublk_destroy_target", 00:06:15.360 "ublk_create_target", 00:06:15.360 "virtio_blk_create_transport", 00:06:15.360 "virtio_blk_get_transports", 00:06:15.360 "vhost_controller_set_coalescing", 00:06:15.360 "vhost_get_controllers", 00:06:15.360 "vhost_delete_controller", 00:06:15.360 "vhost_create_blk_controller", 00:06:15.360 "vhost_scsi_controller_remove_target", 00:06:15.360 "vhost_scsi_controller_add_target", 00:06:15.360 "vhost_start_scsi_controller", 00:06:15.360 "vhost_create_scsi_controller", 00:06:15.360 "thread_set_cpumask", 00:06:15.360 "framework_get_scheduler", 00:06:15.360 "framework_set_scheduler", 00:06:15.360 "framework_get_reactors", 00:06:15.360 "thread_get_io_channels", 00:06:15.360 "thread_get_pollers", 00:06:15.360 "thread_get_stats", 00:06:15.360 "framework_monitor_context_switch", 00:06:15.360 "spdk_kill_instance", 00:06:15.360 "log_enable_timestamps", 00:06:15.360 "log_get_flags", 00:06:15.360 "log_clear_flag", 00:06:15.360 "log_set_flag", 00:06:15.360 "log_get_level", 00:06:15.360 "log_set_level", 00:06:15.360 "log_get_print_level", 00:06:15.360 "log_set_print_level", 00:06:15.360 "framework_enable_cpumask_locks", 00:06:15.360 "framework_disable_cpumask_locks", 00:06:15.360 "framework_wait_init", 00:06:15.360 "framework_start_init", 00:06:15.360 "scsi_get_devices", 00:06:15.360 "bdev_get_histogram", 00:06:15.360 "bdev_enable_histogram", 00:06:15.360 "bdev_set_qos_limit", 00:06:15.360 "bdev_set_qd_sampling_period", 00:06:15.360 "bdev_get_bdevs", 00:06:15.360 "bdev_reset_iostat", 00:06:15.360 "bdev_get_iostat", 00:06:15.360 "bdev_examine", 00:06:15.360 "bdev_wait_for_examine", 00:06:15.360 "bdev_set_options", 00:06:15.360 "notify_get_notifications", 00:06:15.360 "notify_get_types", 00:06:15.360 "accel_get_stats", 00:06:15.360 "accel_set_options", 00:06:15.360 "accel_set_driver", 00:06:15.360 "accel_crypto_key_destroy", 00:06:15.360 "accel_crypto_keys_get", 00:06:15.360 "accel_crypto_key_create", 00:06:15.360 "accel_assign_opc", 00:06:15.360 "accel_get_module_info", 00:06:15.360 "accel_get_opc_assignments", 00:06:15.360 "vmd_rescan", 00:06:15.360 "vmd_remove_device", 00:06:15.360 "vmd_enable", 00:06:15.360 "sock_get_default_impl", 00:06:15.360 "sock_set_default_impl", 00:06:15.360 "sock_impl_set_options", 00:06:15.360 "sock_impl_get_options", 00:06:15.360 "iobuf_get_stats", 00:06:15.360 "iobuf_set_options", 00:06:15.360 "framework_get_pci_devices", 00:06:15.360 "framework_get_config", 00:06:15.360 "framework_get_subsystems", 00:06:15.360 "trace_get_info", 00:06:15.360 "trace_get_tpoint_group_mask", 00:06:15.360 "trace_disable_tpoint_group", 00:06:15.360 "trace_enable_tpoint_group", 00:06:15.360 "trace_clear_tpoint_mask", 00:06:15.360 "trace_set_tpoint_mask", 00:06:15.360 "keyring_get_keys", 00:06:15.360 
"spdk_get_version", 00:06:15.360 "rpc_get_methods" 00:06:15.360 ] 00:06:15.360 12:49:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.360 12:49:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.360 12:49:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3483141 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3483141 ']' 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3483141 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483141 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483141' 00:06:15.360 killing process with pid 3483141 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3483141 00:06:15.360 12:49:53 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3483141 00:06:15.927 00:06:15.927 real 0m1.647s 00:06:15.927 user 0m2.982s 00:06:15.927 sys 0m0.515s 00:06:15.927 12:49:53 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.927 12:49:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.927 ************************************ 00:06:15.927 END TEST spdkcli_tcp 00:06:15.927 ************************************ 00:06:15.927 12:49:53 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.927 12:49:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.927 12:49:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.927 12:49:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.927 ************************************ 00:06:15.927 START TEST dpdk_mem_utility 00:06:15.927 ************************************ 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.927 * Looking for test storage... 
00:06:15.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:15.927 12:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.927 12:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3483503 00:06:15.927 12:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.927 12:49:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3483503 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3483503 ']' 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.927 12:49:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.186 [2024-05-15 12:49:53.809950] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:16.186 [2024-05-15 12:49:53.810019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483503 ] 00:06:16.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.186 [2024-05-15 12:49:53.882033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.186 [2024-05-15 12:49:53.969316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.754 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.754 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:16.754 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.754 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.754 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.754 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.754 { 00:06:16.754 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.754 } 00:06:16.754 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.754 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:17.013 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:17.013 1 heaps totaling size 814.000000 MiB 00:06:17.013 size: 814.000000 MiB heap id: 0 00:06:17.013 end heaps---------- 00:06:17.013 8 mempools totaling size 598.116089 MiB 00:06:17.013 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:17.013 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:17.013 size: 84.521057 MiB name: bdev_io_3483503 00:06:17.013 size: 51.011292 MiB name: evtpool_3483503 00:06:17.013 size: 50.003479 MiB name: msgpool_3483503 
00:06:17.013 size: 21.763794 MiB name: PDU_Pool 00:06:17.013 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:17.013 size: 0.026123 MiB name: Session_Pool 00:06:17.013 end mempools------- 00:06:17.013 6 memzones totaling size 4.142822 MiB 00:06:17.013 size: 1.000366 MiB name: RG_ring_0_3483503 00:06:17.013 size: 1.000366 MiB name: RG_ring_1_3483503 00:06:17.013 size: 1.000366 MiB name: RG_ring_4_3483503 00:06:17.013 size: 1.000366 MiB name: RG_ring_5_3483503 00:06:17.013 size: 0.125366 MiB name: RG_ring_2_3483503 00:06:17.013 size: 0.015991 MiB name: RG_ring_3_3483503 00:06:17.013 end memzones------- 00:06:17.013 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:17.013 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:17.013 list of free elements. size: 12.519348 MiB 00:06:17.013 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:17.013 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:17.013 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:17.013 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:17.013 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:17.013 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:17.013 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:17.013 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:17.013 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:17.013 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:17.013 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:17.013 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:17.013 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:17.013 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:17.013 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:17.013 list of standard malloc elements. 
size: 199.218079 MiB 00:06:17.013 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:17.013 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:17.013 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:17.013 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:17.013 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:17.013 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:17.013 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:17.013 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:17.013 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:17.013 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:17.013 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:17.013 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:17.013 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:17.013 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:17.013 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:17.013 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:17.013 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:17.014 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:17.014 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:17.014 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:17.014 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:17.014 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:17.014 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:17.014 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:17.014 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:17.014 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:17.014 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:17.014 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:17.014 list of memzone associated elements. 
size: 602.262573 MiB 00:06:17.014 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:17.014 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:17.014 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:17.014 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:17.014 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:17.014 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3483503_0 00:06:17.014 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:17.014 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3483503_0 00:06:17.014 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:17.014 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3483503_0 00:06:17.014 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:17.014 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:17.014 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:17.014 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:17.014 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:17.014 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3483503 00:06:17.014 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:17.014 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3483503 00:06:17.014 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:17.014 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3483503 00:06:17.014 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:17.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:17.014 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:17.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:17.014 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:17.014 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:17.014 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:17.014 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:17.014 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:17.014 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3483503 00:06:17.014 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:17.014 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3483503 00:06:17.014 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:17.014 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3483503 00:06:17.014 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:17.014 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3483503 00:06:17.014 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:17.014 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3483503 00:06:17.014 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:17.014 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:17.014 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:17.014 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:17.014 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:17.014 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:17.014 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:17.014 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3483503 00:06:17.014 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:17.014 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:17.014 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:17.014 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:17.014 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:17.014 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3483503 00:06:17.014 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:17.014 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:17.014 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:17.014 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3483503 00:06:17.014 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:17.014 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3483503 00:06:17.014 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:17.014 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:17.014 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:17.014 12:49:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3483503 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3483503 ']' 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3483503 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483503 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483503' 00:06:17.014 killing process with pid 3483503 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3483503 00:06:17.014 12:49:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3483503 00:06:17.273 00:06:17.273 real 0m1.471s 00:06:17.273 user 0m1.477s 00:06:17.273 sys 0m0.461s 00:06:17.273 12:49:55 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.273 12:49:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.273 ************************************ 00:06:17.273 END TEST dpdk_mem_utility 00:06:17.273 ************************************ 00:06:17.531 12:49:55 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:17.531 12:49:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.531 12:49:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.531 12:49:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.531 ************************************ 00:06:17.531 START TEST event 00:06:17.531 ************************************ 00:06:17.531 12:49:55 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:17.531 * Looking for test storage... 
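The dpdk_mem_utility pass above boils down to two commands: an RPC that makes the running target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump. A sketch of the sequence, assuming rpc.py as the equivalent of the rpc_cmd wrapper in the trace and paths relative to an spdk checkout; the -m 0 run above suggests -m selects per-element detail for a given heap:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # free/busy element lists for heap 0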
00:06:17.531 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:17.531 12:49:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.531 12:49:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.531 12:49:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.531 12:49:55 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:17.531 12:49:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.531 12:49:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.531 ************************************ 00:06:17.531 START TEST event_perf 00:06:17.531 ************************************ 00:06:17.531 12:49:55 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.531 Running I/O for 1 seconds...[2024-05-15 12:49:55.372894] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:17.531 [2024-05-15 12:49:55.372979] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483749 ] 00:06:17.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.793 [2024-05-15 12:49:55.449634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.793 [2024-05-15 12:49:55.536304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.793 [2024-05-15 12:49:55.536405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.793 [2024-05-15 12:49:55.536516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.793 [2024-05-15 12:49:55.536508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.776 Running I/O for 1 seconds... 00:06:18.776 lcore 0: 202127 00:06:18.776 lcore 1: 202126 00:06:18.776 lcore 2: 202127 00:06:18.776 lcore 3: 202126 00:06:18.776 done. 00:06:18.776 00:06:18.776 real 0m1.290s 00:06:18.776 user 0m4.182s 00:06:18.776 sys 0m0.100s 00:06:18.776 12:49:56 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.776 12:49:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.776 ************************************ 00:06:18.776 END TEST event_perf 00:06:18.776 ************************************ 00:06:19.034 12:49:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:19.034 12:49:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:19.034 12:49:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.034 12:49:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.034 ************************************ 00:06:19.034 START TEST event_reactor 00:06:19.034 ************************************ 00:06:19.034 12:49:56 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:19.034 [2024-05-15 12:49:56.739789] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:19.034 [2024-05-15 12:49:56.739855] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483957 ] 00:06:19.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.034 [2024-05-15 12:49:56.811023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.034 [2024-05-15 12:49:56.894398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.408 test_start 00:06:20.408 oneshot 00:06:20.408 tick 100 00:06:20.408 tick 100 00:06:20.408 tick 250 00:06:20.408 tick 100 00:06:20.408 tick 100 00:06:20.408 tick 100 00:06:20.408 tick 250 00:06:20.408 tick 500 00:06:20.408 tick 100 00:06:20.408 tick 100 00:06:20.408 tick 250 00:06:20.408 tick 100 00:06:20.408 tick 100 00:06:20.408 test_end 00:06:20.408 00:06:20.408 real 0m1.279s 00:06:20.408 user 0m1.191s 00:06:20.408 sys 0m0.083s 00:06:20.408 12:49:57 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.408 12:49:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:20.408 ************************************ 00:06:20.408 END TEST event_reactor 00:06:20.408 ************************************ 00:06:20.408 12:49:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.408 12:49:58 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:20.408 12:49:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.408 12:49:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.408 ************************************ 00:06:20.408 START TEST event_reactor_perf 00:06:20.408 ************************************ 00:06:20.408 12:49:58 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.408 [2024-05-15 12:49:58.104368] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:20.408 [2024-05-15 12:49:58.104446] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484161 ] 00:06:20.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.408 [2024-05-15 12:49:58.178246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.408 [2024-05-15 12:49:58.261272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.789 test_start 00:06:21.789 test_end 00:06:21.789 Performance: 514066 events per second 00:06:21.789 00:06:21.789 real 0m1.277s 00:06:21.789 user 0m1.176s 00:06:21.789 sys 0m0.096s 00:06:21.789 12:49:59 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.789 12:49:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.789 ************************************ 00:06:21.789 END TEST event_reactor_perf 00:06:21.789 ************************************ 00:06:21.789 12:49:59 event -- event/event.sh@49 -- # uname -s 00:06:21.789 12:49:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.789 12:49:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.789 12:49:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.789 12:49:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.789 12:49:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.789 ************************************ 00:06:21.789 START TEST event_scheduler 00:06:21.789 ************************************ 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.789 * Looking for test storage... 00:06:21.789 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:21.789 12:49:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.789 12:49:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3484387 00:06:21.789 12:49:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.789 12:49:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.789 12:49:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3484387 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3484387 ']' 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
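The three event-framework microbenchmarks above share one calling convention: a coremask (-m) and a duration in seconds (-t). event_perf ran with -m 0xF -t 1, so its four per-lcore counters are the events executed on cores 0 through 3 in one second, while reactor and reactor_perf ran single-core (-c 0x1 in their EAL parameters). Reproducing the runs from a built tree, paths relative to the spdk checkout:

    ./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1 second
    ./test/event/reactor/reactor -t 1                # prints the test_start/tick/test_end trace above
    ./test/event/reactor_perf/reactor_perf -t 1      # single-reactor events-per-second figure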
00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.789 12:49:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.789 [2024-05-15 12:49:59.580239] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:21.789 [2024-05-15 12:49:59.580302] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484387 ] 00:06:21.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.789 [2024-05-15 12:49:59.648063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.046 [2024-05-15 12:49:59.742756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.047 [2024-05-15 12:49:59.742835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.047 [2024-05-15 12:49:59.742913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.047 [2024-05-15 12:49:59.742915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:22.614 12:50:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.614 POWER: Env isn't set yet! 00:06:22.614 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:22.614 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.614 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.614 POWER: Attempting to initialise PSTAT power management... 00:06:22.614 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:22.614 POWER: Initialized successfully for lcore 0 power management 00:06:22.614 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:22.614 POWER: Initialized successfully for lcore 1 power management 00:06:22.614 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:22.614 POWER: Initialized successfully for lcore 2 power management 00:06:22.614 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:22.614 POWER: Initialized successfully for lcore 3 power management 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.614 12:50:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.614 12:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 [2024-05-15 12:50:00.537772] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
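The scheduler test's startup order matters, and the trace shows it exactly: the app is launched with --wait-for-rpc so the framework pauses before init, the test selects the dynamic scheduler over RPC, and only then calls framework_start_init, at which point the POWER lines show each lcore's governor being switched to 'performance'. The same sequence as a sketch, with rpc_cmd replaced by an explicit rpc.py call (an equivalence this sketch assumes):

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # (the real test waits for the RPC socket to come up here)
    ./scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
    ./scripts/rpc.py framework_start_init              # reactors start, governors set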
00:06:22.874 12:50:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:22.874 12:50:00 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.874 12:50:00 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 ************************************ 00:06:22.874 START TEST scheduler_create_thread 00:06:22.874 ************************************ 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 2 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 3 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 4 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 5 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 6 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 7 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 8 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 9 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.874 10 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.874 12:50:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.440 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.440 12:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.440 12:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.440 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.440 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.378 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.378 12:50:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:24.378 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.378 12:50:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.316 12:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.316 12:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:25.316 12:50:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:25.316 12:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.316 12:50:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.251 12:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.251 00:06:26.251 real 0m3.231s 00:06:26.251 user 0m0.025s 00:06:26.251 sys 0m0.006s 00:06:26.251 12:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.251 12:50:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.251 ************************************ 00:06:26.251 END TEST scheduler_create_thread 00:06:26.251 ************************************ 00:06:26.251 12:50:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:26.251 12:50:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3484387 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3484387 ']' 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3484387 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3484387 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3484387' 00:06:26.251 killing process with pid 3484387 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3484387 00:06:26.251 12:50:03 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3484387 00:06:26.509 [2024-05-15 12:50:04.199054] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
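Everything between "Scheduler test application started" and "stopped" above is driven through an rpc.py plugin: scheduler_plugin adds thread-management methods on top of the stock RPC set, and the test uses them to create pinned busy and idle threads, retune one thread's load, and delete another so the dynamic scheduler has something to react to. A sketch of those calls, assuming scheduler_plugin is importable on rpc.py's plugin path; thread ids 11 and 12 are simply the ones this run happened to get:

    rpc="./scripts/rpc.py --plugin scheduler_plugin"   # word-splitting of $rpc is intentional
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100%-busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
    $rpc scheduler_thread_create -n half_active -a 0              # starts idle...
    $rpc scheduler_thread_set_active 11 50                        # ...then raised to 50% active
    $rpc scheduler_thread_create -n deleted -a 100
    $rpc scheduler_thread_delete 12                               # created only to be deleted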
00:06:26.510 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:06:26.510 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:26.510 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:06:26.510 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:26.510 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:06:26.510 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:26.510 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:06:26.510 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:26.768
00:06:26.768 real 0m5.064s
00:06:26.768 user 0m10.259s
00:06:26.768 sys 0m0.460s
12:50:04 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:26.768 12:50:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:26.768 ************************************
00:06:26.768 END TEST event_scheduler
00:06:26.768 ************************************
00:06:26.768 12:50:04 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:26.768 12:50:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:26.768 12:50:04 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:26.768 12:50:04 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:26.768 12:50:04 event -- common/autotest_common.sh@10 -- # set +x
00:06:26.768 ************************************
00:06:26.768 START TEST app_repeat
00:06:26.768 ************************************
00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3485166
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3485166'
Process app_repeat pid: 3485166
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:06:26.768 12:50:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485166 /var/tmp/spdk-nbd.sock
00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3485166 ']'
00:06:26.768 12:50:04 event.app_repeat --
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.768 12:50:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.768 [2024-05-15 12:50:04.633623] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:26.768 [2024-05-15 12:50:04.633698] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485166 ] 00:06:27.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.026 [2024-05-15 12:50:04.706723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.027 [2024-05-15 12:50:04.795018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.027 [2024-05-15 12:50:04.795021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.594 12:50:05 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.595 12:50:05 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:27.595 12:50:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.853 Malloc0 00:06:27.853 12:50:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.112 Malloc1 00:06:28.112 12:50:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.112 12:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.371 /dev/nbd0 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.371 1+0 records in 00:06:28.371 1+0 records out 00:06:28.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019876 s, 20.6 MB/s 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.371 /dev/nbd1 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.371 12:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:28.371 12:50:06 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.631 1+0 records in 00:06:28.631 1+0 records out 00:06:28.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246306 s, 16.6 MB/s 
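Each nbd_start_disk above is followed by the waitfornbd helper traced at autotest_common.sh@864-885: it polls /proc/partitions until the kernel exposes the device, then proves the device is actually readable with a single 4 KiB direct-I/O block, checking the block's size before cleaning up. Condensed into a standalone sketch (retry bound and commands taken from the trace; the poll delay is an assumption since the helper's pacing is not visible in this excerpt, and the temp path is simplified from the repo's test/event/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i
        # Poll until the kernel registers the device (the trace bounds this at 20 tries)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing, not shown in the trace
        done
        # One direct-I/O read must succeed and produce a non-empty file
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)   # the traced runs see size=4096
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }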
00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:28.631 12:50:06 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.631 { 00:06:28.631 "nbd_device": "/dev/nbd0", 00:06:28.631 "bdev_name": "Malloc0" 00:06:28.631 }, 00:06:28.631 { 00:06:28.631 "nbd_device": "/dev/nbd1", 00:06:28.631 "bdev_name": "Malloc1" 00:06:28.631 } 00:06:28.631 ]' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.631 { 00:06:28.631 "nbd_device": "/dev/nbd0", 00:06:28.631 "bdev_name": "Malloc0" 00:06:28.631 }, 00:06:28.631 { 00:06:28.631 "nbd_device": "/dev/nbd1", 00:06:28.631 "bdev_name": "Malloc1" 00:06:28.631 } 00:06:28.631 ]' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.631 /dev/nbd1' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.631 /dev/nbd1' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.631 12:50:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.631 256+0 records in 00:06:28.631 256+0 records out 00:06:28.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108651 s, 96.5 MB/s 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.891 12:50:06 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.891 256+0 records in 00:06:28.891 256+0 records out 00:06:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189146 s, 55.4 MB/s 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.891 256+0 records in 00:06:28.891 256+0 records out 00:06:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217719 s, 48.2 MB/s 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.891 12:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 
0 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.150 12:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.408 12:50:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.408 12:50:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.667 12:50:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.925 [2024-05-15 12:50:07.649192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.925 [2024-05-15 12:50:07.730449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.925 [2024-05-15 12:50:07.730451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.925 [2024-05-15 12:50:07.778697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.925 [2024-05-15 12:50:07.778745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
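That closes Round 0. Note that the harness never signals the app from outside during the rounds: it asks the app to exit over its own RPC (spdk_kill_instance SIGTERM) and then sleeps so the reactors can wind down before the next iteration. Reassembled from the event.sh@23-35 lines traced above, the round loop has roughly this shape (a sketch, not the verbatim test; waitforlisten and nbd_rpc_data_verify are the helpers already traced):

    rpc=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc"                     # app back up and listening
        scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096    # Malloc0
        scripts/rpc.py -s "$rpc" bdev_malloc_create 64 4096    # Malloc1
        nbd_rpc_data_verify "$rpc" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s "$rpc" spdk_kill_instance SIGTERM    # in-app restart, not kill(1)
        sleep 3
    done

The app_repeat binary was started with -t 4, so it survives three such SIGTERMs and restarts its framework each time; the final waitforlisten and killprocess after the loop handle the fourth and last round.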
00:06:33.213 12:50:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.213 12:50:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.213 spdk_app_start Round 1 00:06:33.213 12:50:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485166 /var/tmp/spdk-nbd.sock 00:06:33.213 12:50:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3485166 ']' 00:06:33.213 12:50:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.213 12:50:10 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.213 12:50:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.213 12:50:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.214 12:50:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.214 12:50:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.214 12:50:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:33.214 12:50:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.214 Malloc0 00:06:33.214 12:50:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.214 Malloc1 00:06:33.214 12:50:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.214 12:50:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.473 /dev/nbd0 00:06:33.473 12:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.473 12:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.473 1+0 records in 00:06:33.473 1+0 records out 00:06:33.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231081 s, 17.7 MB/s 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.473 12:50:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.473 12:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.473 12:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.473 12:50:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.732 /dev/nbd1 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.732 1+0 records in 00:06:33.732 1+0 records out 00:06:33.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167755 s, 24.4 MB/s 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:33.732 12:50:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.732 { 00:06:33.732 "nbd_device": "/dev/nbd0", 00:06:33.732 "bdev_name": "Malloc0" 00:06:33.732 }, 00:06:33.732 { 00:06:33.732 "nbd_device": "/dev/nbd1", 00:06:33.732 "bdev_name": "Malloc1" 00:06:33.732 } 00:06:33.732 ]' 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.732 { 00:06:33.732 "nbd_device": "/dev/nbd0", 00:06:33.732 "bdev_name": "Malloc0" 00:06:33.732 }, 00:06:33.732 { 00:06:33.732 "nbd_device": "/dev/nbd1", 00:06:33.732 "bdev_name": "Malloc1" 00:06:33.732 } 00:06:33.732 ]' 00:06:33.732 12:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.991 /dev/nbd1' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.991 /dev/nbd1' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.991 256+0 records in 00:06:33.991 256+0 records out 00:06:33.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457097 s, 229 MB/s 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.991 256+0 records in 00:06:33.991 256+0 records out 00:06:33.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198923 s, 52.7 MB/s 00:06:33.991 12:50:11 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.991 256+0 records in 00:06:33.991 256+0 records out 00:06:33.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207843 s, 50.5 MB/s 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.991 12:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.249 12:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.249 12:50:12 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.249 12:50:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.508 12:50:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.508 12:50:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.767 12:50:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.027 [2024-05-15 12:50:12.770795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.027 [2024-05-15 12:50:12.853252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.027 [2024-05-15 12:50:12.853254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.027 [2024-05-15 12:50:12.902525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.027 [2024-05-15 12:50:12.902575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
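Round 1's data pass repeats the pattern Round 0 established, and the dd/cmp pairs in the trace are the whole of the verification: nbd_dd_data_verify runs once with operation=write and once with operation=verify against the same 1 MiB random file. Stripped of the harness plumbing, the core reduces to this (a sketch; the temp file lives under the repo's test/event directory in the real run):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write pass
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # byte-for-byte verify pass
    done
    rm "$tmp"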
00:06:38.316 12:50:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.316 12:50:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:38.316 spdk_app_start Round 2 00:06:38.316 12:50:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485166 /var/tmp/spdk-nbd.sock 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3485166 ']' 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.316 12:50:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.317 12:50:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:38.317 12:50:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.317 Malloc0 00:06:38.317 12:50:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.317 Malloc1 00:06:38.317 12:50:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.317 12:50:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.575 /dev/nbd0 00:06:38.575 12:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.575 12:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.575 1+0 records in 00:06:38.575 1+0 records out 00:06:38.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209814 s, 19.5 MB/s 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:38.575 12:50:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:38.575 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.575 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.575 12:50:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.834 /dev/nbd1 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.834 1+0 records in 00:06:38.834 1+0 records out 00:06:38.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178611 s, 22.9 MB/s 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:38.834 12:50:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.834 12:50:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.835 12:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.835 12:50:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.835 { 00:06:38.835 "nbd_device": "/dev/nbd0", 00:06:38.835 "bdev_name": "Malloc0" 00:06:38.835 }, 00:06:38.835 { 00:06:38.835 "nbd_device": "/dev/nbd1", 00:06:38.835 "bdev_name": "Malloc1" 00:06:38.835 } 00:06:38.835 ]' 00:06:38.835 12:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.835 { 00:06:38.835 "nbd_device": "/dev/nbd0", 00:06:38.835 "bdev_name": "Malloc0" 00:06:38.835 }, 00:06:38.835 { 00:06:38.835 "nbd_device": "/dev/nbd1", 00:06:38.835 "bdev_name": "Malloc1" 00:06:38.835 } 00:06:38.835 ]' 00:06:38.835 12:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.094 /dev/nbd1' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.094 /dev/nbd1' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.094 256+0 records in 00:06:39.094 256+0 records out 00:06:39.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112553 s, 93.2 MB/s 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.094 256+0 records in 00:06:39.094 256+0 records out 00:06:39.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021154 s, 49.6 MB/s 00:06:39.094 12:50:16 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.094 256+0 records in 00:06:39.094 256+0 records out 00:06:39.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216716 s, 48.4 MB/s 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.094 12:50:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.353 12:50:17 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.353 12:50:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.612 12:50:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.612 12:50:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.871 12:50:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.130 [2024-05-15 12:50:17.869947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.130 [2024-05-15 12:50:17.954106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.130 [2024-05-15 12:50:17.954108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.130 [2024-05-15 12:50:18.002958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.130 [2024-05-15 12:50:18.003005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.422 12:50:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3485166 /var/tmp/spdk-nbd.sock 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3485166 ']' 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:43.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:43.422 12:50:20 event.app_repeat -- event/event.sh@39 -- # killprocess 3485166 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3485166 ']' 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3485166 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3485166 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3485166' 00:06:43.422 killing process with pid 3485166 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3485166 00:06:43.422 12:50:20 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3485166 00:06:43.422 spdk_app_start is called in Round 0. 00:06:43.422 Shutdown signal received, stop current app iteration 00:06:43.422 Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 reinitialization... 00:06:43.422 spdk_app_start is called in Round 1. 00:06:43.422 Shutdown signal received, stop current app iteration 00:06:43.422 Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 reinitialization... 00:06:43.422 spdk_app_start is called in Round 2. 00:06:43.422 Shutdown signal received, stop current app iteration 00:06:43.422 Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 reinitialization... 00:06:43.422 spdk_app_start is called in Round 3. 
00:06:43.422 Shutdown signal received, stop current app iteration 00:06:43.422 12:50:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.422 12:50:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.422 00:06:43.422 real 0m16.486s 00:06:43.422 user 0m34.844s 00:06:43.422 sys 0m3.120s 00:06:43.422 12:50:21 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.422 12:50:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.422 ************************************ 00:06:43.422 END TEST app_repeat 00:06:43.422 ************************************ 00:06:43.422 12:50:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.422 12:50:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.422 12:50:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.422 12:50:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.422 12:50:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.422 ************************************ 00:06:43.422 START TEST cpu_locks 00:06:43.422 ************************************ 00:06:43.422 12:50:21 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.422 * Looking for test storage... 00:06:43.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:43.422 12:50:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.422 12:50:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.422 12:50:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.422 12:50:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.422 12:50:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.422 12:50:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.422 12:50:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.681 ************************************ 00:06:43.681 START TEST default_locks 00:06:43.682 ************************************ 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3487580 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3487580 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3487580 ']' 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
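The default_locks test starting here hinges on the locks_exist helper used just below: SPDK flocks one file per claimed core under /var/tmp, and lslocks can report those locks by pid. A minimal stand-alone version of the check, with the pid from this run (the real helper lives in cpu_locks.sh):

    # Does the given pid hold at least one SPDK CPU-core lock file? (sketch)
    locks_exist() {
        local pid=$1
        # A target started with -m 0x1 flocks /var/tmp/spdk_cpu_lock_000;
        # lslocks -p lists the lock-table entries owned by that pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 3487580 && echo "core lock held"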
00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.682 12:50:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.682 [2024-05-15 12:50:21.364883] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:43.682 [2024-05-15 12:50:21.364935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487580 ] 00:06:43.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.682 [2024-05-15 12:50:21.435994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.682 [2024-05-15 12:50:21.524099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.618 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.618 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:44.618 12:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3487580 00:06:44.618 12:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3487580 00:06:44.618 12:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.877 lslocks: write error 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3487580 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3487580 ']' 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3487580 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.877 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487580 00:06:45.136 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.136 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.136 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487580' 00:06:45.136 killing process with pid 3487580 00:06:45.136 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3487580 00:06:45.136 12:50:22 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3487580 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3487580 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3487580 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.396 12:50:23 event.cpu_locks.default_locks 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3487580 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3487580 ']' 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.396 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3487580) - No such process 00:06:45.396 ERROR: process (pid: 3487580) is no longer running 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.396 00:06:45.396 real 0m1.836s 00:06:45.396 user 0m1.889s 00:06:45.396 sys 0m0.657s 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.396 12:50:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.396 ************************************ 00:06:45.396 END TEST default_locks 00:06:45.396 ************************************ 00:06:45.396 12:50:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.396 12:50:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.396 12:50:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.396 12:50:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.397 ************************************ 00:06:45.397 START TEST default_locks_via_rpc 00:06:45.397 ************************************ 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3487803 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3487803 00:06:45.397 12:50:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3487803 ']' 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.397 12:50:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.658 [2024-05-15 12:50:23.289984] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:45.658 [2024-05-15 12:50:23.290039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487803 ] 00:06:45.658 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.658 [2024-05-15 12:50:23.359744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.658 [2024-05-15 12:50:23.450361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3487803 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3487803 00:06:46.312 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q 
spdk_cpu_lock 00:06:46.571 12:50:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3487803 00:06:46.571 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3487803 ']' 00:06:46.572 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3487803 00:06:46.572 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:46.572 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.572 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487803 00:06:46.831 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.831 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.831 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487803' 00:06:46.831 killing process with pid 3487803 00:06:46.831 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3487803 00:06:46.831 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3487803 00:06:47.090 00:06:47.090 real 0m1.601s 00:06:47.090 user 0m1.628s 00:06:47.090 sys 0m0.563s 00:06:47.090 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.090 12:50:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.090 ************************************ 00:06:47.090 END TEST default_locks_via_rpc 00:06:47.090 ************************************ 00:06:47.090 12:50:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.090 12:50:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.090 12:50:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.090 12:50:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.090 ************************************ 00:06:47.090 START TEST non_locking_app_on_locked_coremask 00:06:47.090 ************************************ 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3488056 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3488056 /var/tmp/spdk.sock 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488056 ']' 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:47.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.090 12:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.349 [2024-05-15 12:50:24.981174] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:47.349 [2024-05-15 12:50:24.981233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488056 ] 00:06:47.349 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.349 [2024-05-15 12:50:25.053179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.349 [2024-05-15 12:50:25.141880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3488205 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3488205 /var/tmp/spdk2.sock 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488205 ']' 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.918 12:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.177 [2024-05-15 12:50:25.840508] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:48.177 [2024-05-15 12:50:25.840569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488205 ] 00:06:48.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.177 [2024-05-15 12:50:25.935195] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
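The arrangement above is the essence of non_locking_app_on_locked_coremask: two targets share core 0, and only the second is told not to contend for the core lock, so both can come up. Stripped to the two launches (binary and sockets as in this run, backgrounding illustrative):

    tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    # First instance claims core 0 and flocks /var/tmp/spdk_cpu_lock_000.
    "$tgt" -m 0x1 &
    # Second instance shares core 0 but skips the lock (hence the
    # "CPU core locks deactivated" notice) and answers on its own socket.
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &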
00:06:48.177 [2024-05-15 12:50:25.935228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.436 [2024-05-15 12:50:26.102541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.004 12:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.004 12:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:49.004 12:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3488056 00:06:49.004 12:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.004 12:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3488056 00:06:49.940 lslocks: write error 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3488056 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488056 ']' 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488056 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488056 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488056' 00:06:49.940 killing process with pid 3488056 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3488056 00:06:49.940 12:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3488056 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3488205 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488205 ']' 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488205 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488205 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488205' 00:06:50.877 
killing process with pid 3488205 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3488205 00:06:50.877 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3488205 00:06:51.136 00:06:51.136 real 0m3.915s 00:06:51.136 user 0m4.120s 00:06:51.136 sys 0m1.371s 00:06:51.136 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.136 12:50:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.136 ************************************ 00:06:51.136 END TEST non_locking_app_on_locked_coremask 00:06:51.136 ************************************ 00:06:51.136 12:50:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.136 12:50:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.136 12:50:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.136 12:50:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.136 ************************************ 00:06:51.136 START TEST locking_app_on_unlocked_coremask 00:06:51.136 ************************************ 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3488606 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3488606 /var/tmp/spdk.sock 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488606 ']' 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.136 12:50:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.136 [2024-05-15 12:50:28.971767] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:51.136 [2024-05-15 12:50:28.971821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488606 ] 00:06:51.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.395 [2024-05-15 12:50:29.045077] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.395 [2024-05-15 12:50:29.045108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.395 [2024-05-15 12:50:29.134239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3488790 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3488790 /var/tmp/spdk2.sock 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488790 ']' 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.964 12:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.964 [2024-05-15 12:50:29.795275] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
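locking_app_on_unlocked_coremask flips that arrangement: now the first target runs unlocked and the second one takes the core lock, so killing the second must release /var/tmp/spdk_cpu_lock_000. In outline, under the same paths as this run:

    tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    # First instance runs on core 0 without locking it
    # ("CPU core locks deactivated" in the notices above).
    "$tgt" -m 0x1 --disable-cpumask-locks &
    # Second instance claims core 0; it is the one whose lock file the
    # checks below look for and whose teardown releases it.
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &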
00:06:51.964 [2024-05-15 12:50:29.795333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488790 ] 00:06:51.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.223 [2024-05-15 12:50:29.890719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.223 [2024-05-15 12:50:30.065775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.790 12:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.790 12:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:52.790 12:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3488790 00:06:52.790 12:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3488790 00:06:52.790 12:50:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.169 lslocks: write error 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3488606 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488606 ']' 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488606 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488606 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488606' 00:06:54.169 killing process with pid 3488606 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3488606 00:06:54.169 12:50:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3488606 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3488790 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488790 ']' 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488790 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488790 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
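The 'lslocks: write error' lines seen here and in the earlier tests are benign: grep -q exits on its first match, the pipe closes, and lslocks reports the resulting EPIPE on its next write. The effect is easy to reproduce and equally easy to silence (pid from this run, purely illustrative):

    # grep -q closes the pipe after the first hit, so lslocks may print
    # "lslocks: write error" even though the check itself succeeded.
    lslocks -p 3488606 | grep -q spdk_cpu_lock
    # Buffering the output first avoids the message:
    out=$(lslocks -p 3488606) && grep -q spdk_cpu_lock <<<"$out"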
00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488790' 00:06:54.736 killing process with pid 3488790 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3488790 00:06:54.736 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3488790 00:06:54.995 00:06:54.995 real 0m3.938s 00:06:54.995 user 0m4.151s 00:06:54.995 sys 0m1.252s 00:06:54.995 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.995 12:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.995 ************************************ 00:06:54.995 END TEST locking_app_on_unlocked_coremask 00:06:54.995 ************************************ 00:06:55.254 12:50:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.254 12:50:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.254 12:50:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.254 12:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.254 ************************************ 00:06:55.254 START TEST locking_app_on_locked_coremask 00:06:55.254 ************************************ 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3489195 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3489195 /var/tmp/spdk.sock 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3489195 ']' 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.254 12:50:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.254 [2024-05-15 12:50:32.995012] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:55.254 [2024-05-15 12:50:32.995165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489195 ] 00:06:55.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.254 [2024-05-15 12:50:33.064871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.514 [2024-05-15 12:50:33.153426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3489370 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3489370 /var/tmp/spdk2.sock 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3489370 /var/tmp/spdk2.sock 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3489370 /var/tmp/spdk2.sock 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3489370 ']' 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.081 12:50:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.081 [2024-05-15 12:50:33.808538] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
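The NOT wrapper exercised above is how these negative tests assert failure: it runs its arguments and inverts the exit status, so 'NOT waitforlisten 3489370' passes only if the second target never comes up. A reduced rendition (the real helper in autotest_common.sh also validates the argument and treats signal exits above 128 specially):

    # NOT: succeed iff the wrapped command fails (simplified sketch).
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        fi
        return 0       # it failed, which is what the caller asserted
    }
    NOT waitforlisten 3489370 /var/tmp/spdk2.sock && echo "rejected, as expected"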
00:06:56.081 [2024-05-15 12:50:33.808599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489370 ] 00:06:56.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.081 [2024-05-15 12:50:33.905543] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3489195 has claimed it. 00:06:56.081 [2024-05-15 12:50:33.905577] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.649 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3489370) - No such process 00:06:56.649 ERROR: process (pid: 3489370) is no longer running 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3489195 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3489195 00:06:56.649 12:50:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.217 lslocks: write error 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3489195 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3489195 ']' 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3489195 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.217 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489195 00:06:57.475 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.476 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.476 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489195' 00:06:57.476 killing process with pid 3489195 00:06:57.476 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3489195 00:06:57.476 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3489195 00:06:57.734 00:06:57.734 real 0m2.518s 00:06:57.734 user 0m2.671s 00:06:57.734 sys 0m0.799s 00:06:57.734 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.734 12:50:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.734 ************************************ 00:06:57.734 END TEST locking_app_on_locked_coremask 00:06:57.734 ************************************ 00:06:57.734 12:50:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.734 12:50:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.734 12:50:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.734 12:50:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.734 ************************************ 00:06:57.734 START TEST locking_overlapped_coremask 00:06:57.734 ************************************ 00:06:57.734 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:57.734 12:50:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3489586 00:06:57.734 12:50:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3489586 /var/tmp/spdk.sock 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3489586 ']' 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.735 12:50:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.735 [2024-05-15 12:50:35.605448] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
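locking_overlapped_coremask moves from a single core to masks: the first target takes 0x7 and the second will ask for 0x1c, so they collide on exactly one core. The arithmetic behind the claim error that follows:

    # 0x7  = 0b00111 -> cores 0,1,2 (first target, pid 3489586)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2,
    # the core named in "Cannot create lock on core 2" below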
00:06:57.735 [2024-05-15 12:50:35.605505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489586 ] 00:06:57.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.994 [2024-05-15 12:50:35.678267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.994 [2024-05-15 12:50:35.765965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.994 [2024-05-15 12:50:35.766052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.994 [2024-05-15 12:50:35.766054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3489771 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3489771 /var/tmp/spdk2.sock 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3489771 /var/tmp/spdk2.sock 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3489771 /var/tmp/spdk2.sock 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3489771 ']' 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.561 12:50:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.820 [2024-05-15 12:50:36.461636] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:06:58.820 [2024-05-15 12:50:36.461694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489771 ] 00:06:58.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.820 [2024-05-15 12:50:36.561870] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3489586 has claimed it. 00:06:58.820 [2024-05-15 12:50:36.561913] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.388 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3489771) - No such process 00:06:59.388 ERROR: process (pid: 3489771) is no longer running 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3489586 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3489586 ']' 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3489586 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489586 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489586' 00:06:59.388 killing process with pid 3489586 00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 3489586 
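check_remaining_locks, expanded above, asserts that after the failed second launch the surviving 0x7 target still owns exactly one lock file per core, 000 through 002, and nothing else. The same comparison in isolation (quoted rather than pattern-escaped as in the original, which is equivalent here):

    # Expect exactly spdk_cpu_lock_000..002 to remain for the -m 0x7 target.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"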
00:06:59.388 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3489586 00:06:59.648 00:06:59.648 real 0m1.976s 00:06:59.648 user 0m5.398s 00:06:59.648 sys 0m0.492s 00:06:59.648 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.648 12:50:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.907 ************************************ 00:06:59.907 END TEST locking_overlapped_coremask 00:06:59.907 ************************************ 00:06:59.907 12:50:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.907 12:50:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.907 12:50:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.907 12:50:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.907 ************************************ 00:06:59.907 START TEST locking_overlapped_coremask_via_rpc 00:06:59.907 ************************************ 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3489979 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3489979 /var/tmp/spdk.sock 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489979 ']' 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.907 12:50:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.907 [2024-05-15 12:50:37.672331] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:06:59.907 [2024-05-15 12:50:37.672378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489979 ] 00:06:59.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.907 [2024-05-15 12:50:37.740768] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.907 [2024-05-15 12:50:37.740799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.166 [2024-05-15 12:50:37.828672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.166 [2024-05-15 12:50:37.828761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.166 [2024-05-15 12:50:37.828763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3489996 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3489996 /var/tmp/spdk2.sock 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489996 ']' 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.733 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.734 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.734 12:50:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.734 [2024-05-15 12:50:38.525119] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:00.734 [2024-05-15 12:50:38.525178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489996 ] 00:07:00.734 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.992 [2024-05-15 12:50:38.624118] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.992 [2024-05-15 12:50:38.624153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.992 [2024-05-15 12:50:38.794593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.992 [2024-05-15 12:50:38.798111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.992 [2024-05-15 12:50:38.798112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.561 [2024-05-15 12:50:39.359126] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3489979 has claimed it. 
00:07:01.561 request: 00:07:01.561 { 00:07:01.561 "method": "framework_enable_cpumask_locks", 00:07:01.561 "req_id": 1 00:07:01.561 } 00:07:01.561 Got JSON-RPC error response 00:07:01.561 response: 00:07:01.561 { 00:07:01.561 "code": -32603, 00:07:01.561 "message": "Failed to claim CPU core: 2" 00:07:01.561 } 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3489979 /var/tmp/spdk.sock 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489979 ']' 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.561 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3489996 /var/tmp/spdk2.sock 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489996 ']' 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
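The failure above is the whole point of the test: the first target (pid 3489979) runs with -m 0x7 (cores 0-2) and the second (pid 3489996) with -m 0x1c (cores 2-4), so once framework_enable_cpumask_locks succeeds on the first target, core 2 is already claimed and the same RPC against the second target has to come back with error -32603. A minimal sketch of the same check outside the harness, assuming the stock scripts/rpc.py from the checked-out tree and the two socket paths used in this run:

  # enable locks on the first target; it claims every core in its 0x7 mask
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # the second target overlaps on core 2, so this call is expected to fail
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo 'expected failure: core 2 already claimed'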
00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.820 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.086 00:07:02.086 real 0m2.132s 00:07:02.086 user 0m0.870s 00:07:02.086 sys 0m0.187s 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.086 12:50:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.086 ************************************ 00:07:02.086 END TEST locking_overlapped_coremask_via_rpc 00:07:02.086 ************************************ 00:07:02.086 12:50:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.086 12:50:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3489979 ]] 00:07:02.086 12:50:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3489979 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489979 ']' 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489979 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489979 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489979' 00:07:02.086 killing process with pid 3489979 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3489979 00:07:02.086 12:50:39 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3489979 00:07:02.350 12:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3489996 ]] 00:07:02.350 12:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3489996 00:07:02.350 12:50:40 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489996 ']' 00:07:02.350 12:50:40 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489996 00:07:02.350 12:50:40 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489996 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489996' 00:07:02.608 killing process with pid 3489996 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3489996 00:07:02.608 12:50:40 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3489996 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3489979 ]] 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3489979 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489979 ']' 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489979 00:07:02.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3489979) - No such process 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3489979 is not found' 00:07:02.867 Process with pid 3489979 is not found 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3489996 ]] 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3489996 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489996 ']' 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489996 00:07:02.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3489996) - No such process 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3489996 is not found' 00:07:02.867 Process with pid 3489996 is not found 00:07:02.867 12:50:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.867 00:07:02.867 real 0m19.495s 00:07:02.867 user 0m31.723s 00:07:02.867 sys 0m6.391s 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.867 12:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.867 ************************************ 00:07:02.867 END TEST cpu_locks 00:07:02.867 ************************************ 00:07:02.867 00:07:02.867 real 0m45.508s 00:07:02.867 user 1m23.592s 00:07:02.867 sys 0m10.668s 00:07:02.867 12:50:40 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.867 12:50:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.867 ************************************ 00:07:02.867 END TEST event 00:07:02.867 ************************************ 00:07:03.127 12:50:40 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:03.127 12:50:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.127 12:50:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.127 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 START TEST thread 00:07:03.127 ************************************ 00:07:03.127 12:50:40 thread -- common/autotest_common.sh@1121 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:03.127 * Looking for test storage... 00:07:03.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:03.127 12:50:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.127 12:50:40 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:03.127 12:50:40 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.127 12:50:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 START TEST thread_poller_perf 00:07:03.127 ************************************ 00:07:03.127 12:50:40 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.127 [2024-05-15 12:50:40.975787] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:03.127 [2024-05-15 12:50:40.975854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490471 ] 00:07:03.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.387 [2024-05-15 12:50:41.050798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.387 [2024-05-15 12:50:41.139449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.387 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:04.764 ====================================== 00:07:04.764 busy:2307795580 (cyc) 00:07:04.764 total_run_count: 416000 00:07:04.764 tsc_hz: 2300000000 (cyc) 00:07:04.764 ====================================== 00:07:04.764 poller_cost: 5547 (cyc), 2411 (nsec) 00:07:04.764 00:07:04.764 real 0m1.293s 00:07:04.764 user 0m1.189s 00:07:04.764 sys 0m0.099s 00:07:04.764 12:50:42 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.765 12:50:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.765 ************************************ 00:07:04.765 END TEST thread_poller_perf 00:07:04.765 ************************************ 00:07:04.765 12:50:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.765 12:50:42 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:04.765 12:50:42 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.765 12:50:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.765 ************************************ 00:07:04.765 START TEST thread_poller_perf 00:07:04.765 ************************************ 00:07:04.765 12:50:42 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.765 [2024-05-15 12:50:42.361174] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
00:07:04.765 [2024-05-15 12:50:42.361250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490672 ] 00:07:04.765 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.765 [2024-05-15 12:50:42.438063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.765 [2024-05-15 12:50:42.529823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.765 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:06.143 ====================================== 00:07:06.143 busy:2301687846 (cyc) 00:07:06.143 total_run_count: 5526000 00:07:06.143 tsc_hz: 2300000000 (cyc) 00:07:06.143 ====================================== 00:07:06.143 poller_cost: 416 (cyc), 180 (nsec) 00:07:06.143 00:07:06.143 real 0m1.295s 00:07:06.143 user 0m1.187s 00:07:06.143 sys 0m0.102s 00:07:06.143 12:50:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.143 12:50:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 00:07:06.143 END TEST thread_poller_perf 00:07:06.143 ************************************ 00:07:06.143 12:50:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.143 00:07:06.143 real 0m2.877s 00:07:06.143 user 0m2.476s 00:07:06.143 sys 0m0.405s 00:07:06.143 12:50:43 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.143 12:50:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 00:07:06.143 END TEST thread 00:07:06.143 ************************************ 00:07:06.143 12:50:43 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:06.143 12:50:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.143 12:50:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.143 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 ************************************ 00:07:06.143 START TEST accel 00:07:06.143 ************************************ 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:06.143 * Looking for test storage... 
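The poller_cost figures in the two poller_perf runs above are plain division: busy cycles over total_run_count, then cycles converted to nanoseconds at the reported tsc_hz. A quick re-derivation in shell arithmetic (integer division happens to reproduce the truncated values the tool prints):

  echo $(( 2307795580 / 416000 ))             # run 1 (-l 1): 5547 cyc per poll
  echo $(( 5547 * 1000000000 / 2300000000 ))  # -> 2411 nsec at 2.3 GHz
  echo $(( 2301687846 / 5526000 ))            # run 2 (-l 0): 416 cyc per poll
  echo $(( 416 * 1000000000 / 2300000000 ))   # -> 180 nsec

The gap between the runs tracks the -l period argument: a 1 microsecond period registers timed pollers, which presumably pay for a clock check on every iteration, while -l 0 registers untimed pollers that are simply invoked back-to-back.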
00:07:06.143 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:06.143 12:50:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:06.143 12:50:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:06.143 12:50:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.143 12:50:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3490922 00:07:06.143 12:50:43 accel -- accel/accel.sh@63 -- # waitforlisten 3490922 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@827 -- # '[' -z 3490922 ']' 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.143 12:50:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.143 12:50:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.143 12:50:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.143 12:50:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.143 12:50:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.143 12:50:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.143 12:50:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.143 12:50:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.143 12:50:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:06.143 12:50:43 accel -- accel/accel.sh@41 -- # jq -r . 00:07:06.143 [2024-05-15 12:50:43.929668] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:06.143 [2024-05-15 12:50:43.929722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490922 ] 00:07:06.143 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.143 [2024-05-15 12:50:44.001179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.402 [2024-05-15 12:50:44.090009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.970 12:50:44 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.970 12:50:44 accel -- common/autotest_common.sh@860 -- # return 0 00:07:06.970 12:50:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:06.970 12:50:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:06.970 12:50:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:06.970 12:50:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:06.970 12:50:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:06.970 12:50:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:06.970 12:50:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:06.970 12:50:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.970 12:50:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.970 12:50:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.970 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.970 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.970 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 
00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.971 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.971 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.971 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.971 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.971 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.971 12:50:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.971 12:50:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.971 12:50:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.971 12:50:44 accel -- accel/accel.sh@75 -- # killprocess 3490922 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@946 -- # '[' -z 3490922 ']' 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@950 -- # kill -0 3490922 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@951 -- # uname 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3490922 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3490922' 00:07:06.971 killing process with pid 3490922 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@965 -- # kill 3490922 00:07:06.971 12:50:44 accel -- common/autotest_common.sh@970 -- # wait 3490922 00:07:07.538 12:50:45 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:07.538 12:50:45 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.538 12:50:45 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:07.538 12:50:45 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
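The long IFS== / read -r opc module loop above is accel.sh walking the output of the accel_get_opc_assignments RPC; with no accel hardware configured on this node, every opcode maps to the software module. A sketch of inspecting the same mapping by hand, assuming the stock scripts/rpc.py and reusing the jq filter the script itself applies:

  scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # expected shape here: copy=software, fill=software, crc32c=software, ...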
00:07:07.538 12:50:45 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.538 12:50:45 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:07.538 12:50:45 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.538 12:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.538 ************************************ 00:07:07.538 START TEST accel_missing_filename 00:07:07.538 ************************************ 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.538 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:07.538 12:50:45 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:07.538 [2024-05-15 12:50:45.405030] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:07.538 [2024-05-15 12:50:45.405103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491141 ] 00:07:07.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.797 [2024-05-15 12:50:45.477936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.797 [2024-05-15 12:50:45.569040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.797 [2024-05-15 12:50:45.617138] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.056 [2024-05-15 12:50:45.686830] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:08.056 A filename is required. 
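accel_missing_filename asserts exactly this failure: -w compress with no -l gives accel_perf no input file, so it aborts before starting, and the NOT wrapper in run_test turns that nonzero exit into a pass. For contrast, a compress invocation that should start cleanly only needs the input file (the bib path is the one the next test in this log uses):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib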
00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.056 00:07:08.056 real 0m0.419s 00:07:08.056 user 0m0.306s 00:07:08.056 sys 0m0.147s 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.056 12:50:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:08.056 ************************************ 00:07:08.056 END TEST accel_missing_filename 00:07:08.056 ************************************ 00:07:08.056 12:50:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:08.056 12:50:45 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:08.056 12:50:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.056 12:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.056 ************************************ 00:07:08.056 START TEST accel_compress_verify 00:07:08.056 ************************************ 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.056 12:50:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.056 12:50:45 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:08.056 12:50:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:08.056 [2024-05-15 12:50:45.908299] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:08.056 [2024-05-15 12:50:45.908366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491327 ] 00:07:08.315 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.315 [2024-05-15 12:50:45.980533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.315 [2024-05-15 12:50:46.066716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.315 [2024-05-15 12:50:46.114399] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.315 [2024-05-15 12:50:46.183655] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:08.574 00:07:08.574 Compression does not support the verify option, aborting. 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.574 00:07:08.574 real 0m0.407s 00:07:08.574 user 0m0.295s 00:07:08.574 sys 0m0.151s 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.574 12:50:46 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:08.574 ************************************ 00:07:08.574 END TEST accel_compress_verify 00:07:08.574 ************************************ 00:07:08.574 12:50:46 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:08.574 12:50:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:08.574 12:50:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.574 12:50:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.574 ************************************ 00:07:08.574 START TEST accel_wrong_workload 00:07:08.574 ************************************ 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.574 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:08.574 
12:50:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:07:08.574 12:50:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:07:08.574 Unsupported workload type: foobar
00:07:08.574 [2024-05-15 12:50:46.391054] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:08.574 accel_perf options:
00:07:08.574 [-h help message]
00:07:08.574 [-q queue depth per core]
00:07:08.574 [-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:07:08.574 [-T number of threads per core]
00:07:08.574 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:08.574 [-t time in seconds]
00:07:08.574 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:08.574  dif_verify, dif_generate, dif_generate_copy]
00:07:08.574 [-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:07:08.574 [-l for compress/decompress workloads, name of uncompressed input file]
00:07:08.574 [-S for crc32c workload, use this seed value (default 0)]
00:07:08.574 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:07:08.574 [-f for fill workload, use this BYTE value (default 255)]
00:07:08.574 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:08.574 [-y verify result if this switch is on]
00:07:08.574 [-a tasks to allocate per core (default: same value as -q)]
00:07:08.574 Can be used to spread operations across a wider range of memory.
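For contrast with the rejected -w foobar, a well-formed invocation assembled purely from the usage text above; the queue depth of 64 is an arbitrary illustration, while the remaining flags mirror the crc32c test that follows:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -q 64 -t 1 -w crc32c -S 32 -y   # -q 64 is illustrative, not from this log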
00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.575 00:07:08.575 real 0m0.034s 00:07:08.575 user 0m0.022s 00:07:08.575 sys 0m0.011s 00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.575 12:50:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:08.575 ************************************ 00:07:08.575 END TEST accel_wrong_workload 00:07:08.575 ************************************ 00:07:08.575 Error: writing output failed: Broken pipe 00:07:08.575 12:50:46 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.575 12:50:46 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:08.575 12:50:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.575 12:50:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.834 ************************************ 00:07:08.834 START TEST accel_negative_buffers 00:07:08.834 ************************************ 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:08.834 12:50:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:08.834 -x option must be non-negative. 
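accel_negative_buffers feeds -x -1, which fails the non-negative check above and then trips the same parse failure and usage dump shown below. Per that usage text, xor also wants at least two source buffers, so a sketch of the smallest invocation that should get past argument parsing (same binary path the test uses):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w xor -y -x 2   # minimum -x per the usage text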
00:07:08.834 [2024-05-15 12:50:46.494436] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:08.834 accel_perf options:
00:07:08.834 [-h help message]
00:07:08.834 [-q queue depth per core]
00:07:08.834 [-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:07:08.834 [-T number of threads per core]
00:07:08.834 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:08.834 [-t time in seconds]
00:07:08.834 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:08.834  dif_verify, dif_generate, dif_generate_copy]
00:07:08.834 [-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:07:08.834 [-l for compress/decompress workloads, name of uncompressed input file]
00:07:08.834 [-S for crc32c workload, use this seed value (default 0)]
00:07:08.834 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:07:08.834 [-f for fill workload, use this BYTE value (default 255)]
00:07:08.834 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:08.834 [-y verify result if this switch is on]
00:07:08.834 [-a tasks to allocate per core (default: same value as -q)]
00:07:08.834 Can be used to spread operations across a wider range of memory.
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:08.834
00:07:08.834 real 0m0.032s
00:07:08.834 user 0m0.016s
00:07:08.834 sys 0m0.016s
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:08.834 12:50:46 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:07:08.834 ************************************
00:07:08.834 END TEST accel_negative_buffers
00:07:08.834 ************************************
00:07:08.834 Error: writing output failed: Broken pipe
00:07:08.835 12:50:46 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:08.835 12:50:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:08.835 12:50:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:08.835 12:50:46 accel -- common/autotest_common.sh@10 -- # set +x
00:07:08.835 ************************************
00:07:08.835 START TEST accel_crc32c
00:07:08.835 ************************************
00:07:08.835 12:50:46 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:08.835 12:50:46 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:08.835 [2024-05-15 12:50:46.612083] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:08.835 [2024-05-15 12:50:46.612155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491399 ] 00:07:08.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.835 [2024-05-15 12:50:46.685874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.094 [2024-05-15 12:50:46.778449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.094 12:50:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:47 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:10.472 12:50:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.472 00:07:10.472 real 0m1.419s 00:07:10.472 user 0m1.281s 00:07:10.472 sys 0m0.142s 00:07:10.472 12:50:48 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.472 12:50:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 END TEST accel_crc32c 00:07:10.472 ************************************ 00:07:10.472 12:50:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:10.472 12:50:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:10.472 12:50:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.472 12:50:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 START TEST accel_crc32c_C2 00:07:10.472 ************************************ 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c 
00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:10.472 [2024-05-15 12:50:48.108323] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:10.472 [2024-05-15 12:50:48.108380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491607 ]
00:07:10.472 EAL: No free 2048 kB hugepages reported on node 1
00:07:10.472 [2024-05-15 12:50:48.178946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.472 [2024-05-15 12:50:48.264746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.472 12:50:48 accel.accel_crc32c_C2 -- [xtrace accel_perf config readout elided: val=0x1, val=crc32c (accel_opc=crc32c), val=0, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- [xtrace accel_perf result readout elided: empty val= entries]
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:11.933
00:07:11.933 real 0m1.407s
00:07:11.933 user 0m1.271s
00:07:11.933 sys 0m0.140s
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:11.933 12:50:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:11.933 ************************************
00:07:11.933 END TEST accel_crc32c_C2
00:07:11.933 ************************************
00:07:11.933 12:50:49 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:11.933 12:50:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:11.933 12:50:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:11.933 12:50:49 accel -- common/autotest_common.sh@10 -- # set +x
00:07:11.933 ************************************
00:07:11.933 START TEST accel_copy
00:07:11.933 ************************************
00:07:11.933 12:50:49 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y
00:07:11.933 12:50:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:11.933 12:50:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:11.933 12:50:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:11.933 12:50:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:11.933 12:50:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:11.933 12:50:49 accel.accel_copy -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:11.933 [2024-05-15 12:50:49.597440] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:11.933 [2024-05-15 12:50:49.597513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491807 ]
00:07:11.933 EAL: No free 2048 kB hugepages reported on node 1
00:07:11.933 [2024-05-15 12:50:49.669822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.933 [2024-05-15 12:50:49.758841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.192 12:50:49 accel.accel_copy -- [xtrace accel_perf config readout elided: val=0x1, val=copy (accel_opc=copy), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:13.128 12:50:50 accel.accel_copy -- [xtrace accel_perf result readout elided: empty val= entries]
00:07:13.128 12:50:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:13.128 12:50:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:13.128 12:50:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:13.128
00:07:13.128 real 0m1.412s
00:07:13.128 user 0m1.267s
00:07:13.128 sys 0m0.149s
00:07:13.128 12:50:50 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:13.128 12:50:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:13.388 ************************************
00:07:13.388 END TEST accel_copy
00:07:13.388 ************************************
00:07:13.388 12:50:51 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.388 12:50:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:07:13.388 12:50:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:13.388 12:50:51 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.388 ************************************
00:07:13.388 START TEST accel_fill
00:07:13.388 ************************************
00:07:13.388 12:50:51 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.388 12:50:51 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:13.388 12:50:51 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:13.388 12:50:51 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.388 12:50:51 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:13.388 12:50:51 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:13.388 12:50:51 accel.accel_fill -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:13.388 [2024-05-15 12:50:51.086251] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:13.388 [2024-05-15 12:50:51.086308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492013 ]
00:07:13.388 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.388 [2024-05-15 12:50:51.159366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.388 [2024-05-15 12:50:51.252046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.647 12:50:51 accel.accel_fill -- [xtrace accel_perf config readout elided: val=0x1, val=fill (accel_opc=fill), val=0x80, val='4096 bytes', val=software (accel_module=software), val=64, val=64, val=1, val='1 seconds', val=Yes]
00:07:15.027 12:50:52 accel.accel_fill -- [xtrace accel_perf result readout elided: empty val= entries]
00:07:15.028 12:50:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:15.028 12:50:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:15.028 12:50:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:15.028
00:07:15.028 real 0m1.417s
00:07:15.028 user 0m1.276s
00:07:15.028 sys 0m0.145s
00:07:15.028 12:50:52 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:15.028 12:50:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:15.028 ************************************
00:07:15.028 END TEST accel_fill
00:07:15.028 ************************************
00:07:15.028 12:50:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:15.028 12:50:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:15.028 12:50:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:15.028 12:50:52 accel -- common/autotest_common.sh@10 -- # set +x
00:07:15.028 ************************************
00:07:15.028 START TEST accel_copy_crc32c
00:07:15.028 ************************************
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:15.028 [2024-05-15 12:50:52.582931] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:15.028 [2024-05-15 12:50:52.583006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492248 ]
00:07:15.028 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.028 [2024-05-15 12:50:52.656135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.028 [2024-05-15 12:50:52.746309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.028 12:50:52 accel.accel_copy_crc32c -- [xtrace accel_perf config readout elided: val=0x1, val=copy_crc32c (accel_opc=copy_crc32c), val=0, val='4096 bytes', val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- [xtrace accel_perf result readout elided: empty val= entries]
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:16.407
00:07:16.407 real 0m1.418s
00:07:16.407 user 0m1.278s
00:07:16.407 sys 0m0.145s
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:16.407 12:50:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:16.407 ************************************
00:07:16.407 END TEST accel_copy_crc32c
00:07:16.407 ************************************
00:07:16.407 12:50:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:16.407 12:50:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:16.407 12:50:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:16.407 12:50:54 accel -- common/autotest_common.sh@10 -- # set +x
00:07:16.407 ************************************
00:07:16.407 START TEST accel_copy_crc32c_C2
00:07:16.407 ************************************
00:07:16.407 12:50:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:16.407 12:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:16.407 12:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:16.407 12:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:16.408 12:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:16.408 12:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:16.408 12:50:54 accel.accel_copy_crc32c_C2 -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:16.408 [2024-05-15 12:50:54.065262] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:16.408 [2024-05-15 12:50:54.065321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492495 ]
00:07:16.408 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.408 [2024-05-15 12:50:54.132033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.408 [2024-05-15 12:50:54.219647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.408 12:50:54 accel.accel_copy_crc32c_C2 -- [xtrace accel_perf config readout elided: val=0x1, val=copy_crc32c (accel_opc=copy_crc32c), val=0, val='4096 bytes', val='8192 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:17.787 12:50:55 accel.accel_copy_crc32c_C2 -- [xtrace accel_perf result readout elided: empty val= entries]
00:07:17.787 12:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:17.787 12:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:17.787 12:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:17.788
00:07:17.788 real 0m1.396s
00:07:17.788 user 0m1.273s
00:07:17.788 sys 0m0.129s
00:07:17.788 12:50:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:17.788 12:50:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:17.788 ************************************
00:07:17.788 END TEST accel_copy_crc32c_C2
00:07:17.788 ************************************
00:07:17.788 12:50:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:17.788 12:50:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:17.788 12:50:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:17.788 12:50:55 accel -- common/autotest_common.sh@10 -- # set +x
00:07:17.788 ************************************
00:07:17.788 START TEST accel_dualcast
00:07:17.788 ************************************
00:07:17.788 12:50:55 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:07:17.788 12:50:55 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:17.788 12:50:55 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:17.788 12:50:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:17.788 12:50:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:17.788 12:50:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:17.788 12:50:55 accel.accel_dualcast -- [xtrace build_accel_config elided: accel_json_cfg=(), [[ 0 -gt 0 ]] guards false, [[ -n '' ]] false, local IFS=',', jq -r .]
00:07:17.788 [2024-05-15 12:50:55.537700] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:17.788 [2024-05-15 12:50:55.537747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492738 ] 00:07:17.788 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.788 [2024-05-15 12:50:55.607976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.048 [2024-05-15 12:50:55.694652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 
12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.048 12:50:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:19.427 12:50:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.427 00:07:19.427 real 0m1.393s 00:07:19.427 user 0m1.265s 00:07:19.427 sys 0m0.133s 00:07:19.427 12:50:56 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.427 12:50:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:19.427 ************************************ 00:07:19.427 END TEST accel_dualcast 00:07:19.427 ************************************ 00:07:19.427 12:50:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:19.427 12:50:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:19.427 12:50:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.427 12:50:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.427 ************************************ 00:07:19.427 START TEST accel_compare 00:07:19.427 ************************************ 00:07:19.427 12:50:57 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:19.427 12:50:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:19.427 [2024-05-15 12:50:57.030342] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
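The -c /dev/fd/62 argument in these traces is the JSON accel config assembled by build_accel_config; with no module flags set it stays empty, and jq -r . simply emits it onto a file descriptor. A hedged sketch of the same plumbing using bash process substitution (the JSON body here is a placeholder, not the harness's real config):

    # hypothetical: hand accel_perf a config on an anonymous fd, which is why
    # the trace shows -c /dev/fd/62
    cfg='{}'  # placeholder; build_accel_config normally assembles this
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c <(echo "$cfg") -t 1 -w compare -y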
00:07:19.427 [2024-05-15 12:50:57.030420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492977 ] 00:07:19.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.428 [2024-05-15 12:50:57.102458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.428 [2024-05-15 12:50:57.189693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.428 12:50:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.805 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.806 12:50:58 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:20.806 12:50:58 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.806 00:07:20.806 real 0m1.413s 00:07:20.806 user 0m1.275s 00:07:20.806 sys 0m0.141s 00:07:20.806 12:50:58 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.806 12:50:58 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:20.806 ************************************ 00:07:20.806 END TEST accel_compare 00:07:20.806 ************************************ 00:07:20.806 12:50:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:20.806 12:50:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:20.806 12:50:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.806 12:50:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.806 ************************************ 00:07:20.806 START TEST accel_xor 00:07:20.806 ************************************ 00:07:20.806 12:50:58 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:20.806 12:50:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:20.806 [2024-05-15 12:50:58.522891] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
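For this first xor pass the trace leaves the source count at its default of two buffers (val=2 in the trace that follows). Standalone sketch under the same assumptions as the dualcast one:

    # xor two 4096-byte source buffers for 1 second and verify the result
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y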
00:07:20.806 [2024-05-15 12:50:58.522973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493190 ] 00:07:20.806 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.806 [2024-05-15 12:50:58.594547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.806 [2024-05-15 12:50:58.680464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.065 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.066 12:50:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 
12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.445 00:07:22.445 real 0m1.408s 00:07:22.445 user 0m1.268s 00:07:22.445 sys 0m0.144s 00:07:22.445 12:50:59 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.445 12:50:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:22.445 ************************************ 00:07:22.445 END TEST accel_xor 00:07:22.445 ************************************ 00:07:22.445 12:50:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:22.445 12:50:59 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:22.445 12:50:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.445 12:50:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.445 ************************************ 00:07:22.445 START TEST accel_xor 00:07:22.445 ************************************ 00:07:22.445 12:50:59 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:22.445 12:50:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:22.445 [2024-05-15 12:51:00.008374] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
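This second xor pass differs only in the -x 3 flag, recorded as val=3 in the trace: three source buffers instead of two. Equivalent standalone sketch:

    # same xor workload, widened to three source buffers
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3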
00:07:22.445 [2024-05-15 12:51:00.008442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493396 ] 00:07:22.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.445 [2024-05-15 12:51:00.086448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.445 [2024-05-15 12:51:00.175965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.445 12:51:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.825 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.825 12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.825 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.825 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.826 
12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:23.826 12:51:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.826 00:07:23.826 real 0m1.408s 00:07:23.826 user 0m1.276s 00:07:23.826 sys 0m0.135s 00:07:23.826 12:51:01 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.826 12:51:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:23.826 ************************************ 00:07:23.826 END TEST accel_xor 00:07:23.826 ************************************ 00:07:23.826 12:51:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:23.826 12:51:01 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:23.826 12:51:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.826 12:51:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.826 ************************************ 00:07:23.826 START TEST accel_dif_verify 00:07:23.826 ************************************ 00:07:23.826 12:51:01 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:23.826 [2024-05-15 12:51:01.492706] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
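The DIF cases configure sizes rather than extra flags: the trace that follows sets 4096-byte transfer buffers, 512-byte blocks, and 8 bytes of DIF metadata per block, with verify left at No since dif_verify is itself the check. Standalone sketch, same assumptions as above:

    # verify DIF tags over 4096-byte transfers (512-byte blocks + 8B metadata)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify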
00:07:23.826 [2024-05-15 12:51:01.492763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493700 ] 00:07:23.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.826 [2024-05-15 12:51:01.565133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.826 [2024-05-15 12:51:01.649575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 
12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.826 12:51:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 
12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:25.204 12:51:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.204 00:07:25.204 real 0m1.391s 00:07:25.204 user 0m1.265s 00:07:25.204 sys 0m0.130s 00:07:25.204 12:51:02 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.204 12:51:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.204 ************************************ 00:07:25.204 END TEST accel_dif_verify 00:07:25.204 ************************************ 00:07:25.204 12:51:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:25.204 12:51:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:25.204 12:51:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.204 12:51:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.204 ************************************ 00:07:25.204 START TEST accel_dif_generate 00:07:25.204 ************************************ 00:07:25.204 12:51:02 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:25.204 12:51:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:25.204 12:51:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.205 
12:51:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:25.205 12:51:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:25.205 [2024-05-15 12:51:02.964463] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:25.205 [2024-05-15 12:51:02.964519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493924 ] 00:07:25.205 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.205 [2024-05-15 12:51:03.037020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.463 [2024-05-15 12:51:03.125128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.463 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:25.464 12:51:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:25.464 12:51:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
(accel.sh option loop, remaining iterations: empty "val=" reads at 00:07:25.464 and 00:07:26.842)
00:07:26.842 12:51:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:26.842 12:51:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:26.842 12:51:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:26.842
00:07:26.842 real 0m1.411s
00:07:26.842 user 0m1.283s
00:07:26.842 sys 0m0.132s
00:07:26.843 12:51:04 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:26.843 12:51:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:26.843 ************************************
00:07:26.843 END TEST accel_dif_generate
00:07:26.843 ************************************
00:07:26.843 12:51:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:26.843 12:51:04 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:26.843 12:51:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:26.843 12:51:04 accel -- common/autotest_common.sh@10 -- # set +x
00:07:26.843 ************************************
00:07:26.843 START TEST accel_dif_generate_copy
00:07:26.843 ************************************
00:07:26.843 12:51:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
(accel_test setup trace: local accel_opc; local accel_module)
00:07:26.843 12:51:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:26.843 12:51:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
(build_accel_config trace: accel_json_cfg=(); no optional accel modules enabled; config emitted through jq -r .)
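The "-c /dev/fd/62" argument above comes from bash process substitution: accel_test builds the accel JSON config in memory and hands accel_perf a /dev/fd path instead of a temp file. A minimal sketch of the same pattern (a standalone repro; the empty "subsystems" config and the cd into this job's workspace are assumptions, not part of the log):

    # hand accel_perf an inline JSON config; bash expands <(...) to /dev/fd/<n>,
    # which is exactly how "/dev/fd/62" shows up in the traced command line
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -c <(printf '{"subsystems": []}') -t 1 -w dif_generate_copy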
00:07:26.843 [2024-05-15 12:51:04.457183] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:26.843 [2024-05-15 12:51:04.457241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494266 ]
00:07:26.843 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.843 [2024-05-15 12:51:04.528517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.843 [2024-05-15 12:51:04.614320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
(accel.sh option loop, values read for this run: 0x1, accel_opc=dif_generate_copy, '4096 bytes' twice, accel_module=software, 32, 32, 1, '1 seconds', No)
00:07:28.220 12:51:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.220 12:51:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:28.220 12:51:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.220
00:07:28.220 real 0m1.407s
00:07:28.220 user 0m1.275s
00:07:28.220 sys 0m0.136s
00:07:28.220 12:51:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:28.220 12:51:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:28.220 ************************************
00:07:28.220 END TEST accel_dif_generate_copy
00:07:28.220 ************************************
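The START/END banners and the real/user/sys triplets around each case come from the harness's run_test wrapper (the common/autotest_common.sh frames in the trace), which times the named test command between the two banners. An illustrative reduction of that pattern (a sketch only; SPDK's actual run_test also manages xtrace state and failure bookkeeping):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # runs e.g. 'accel_test -t 1 -w compress ...' and prints real/user/sys
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }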
00:07:28.220 12:51:05 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:28.220 12:51:05 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
00:07:28.220 12:51:05 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:28.220 12:51:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:28.220 12:51:05 accel -- common/autotest_common.sh@10 -- # set +x
00:07:28.220 ************************************
00:07:28.220 START TEST accel_comp
00:07:28.220 ************************************
00:07:28.220 12:51:05 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
(accel_test setup trace: local accel_opc; local accel_module)
00:07:28.220 12:51:05 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
00:07:28.220 12:51:05 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
(build_accel_config trace elided, as above)
00:07:28.220 [2024-05-15 12:51:05.939749] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:28.220 [2024-05-15 12:51:05.939806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494692 ]
00:07:28.220 EAL: No free 2048 kB hugepages reported on node 1
00:07:28.220 [2024-05-15 12:51:06.009837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.479 [2024-05-15 12:51:06.092155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
(accel.sh option loop, values read for this run: 0x1, accel_opc=compress, '4096 bytes', accel_module=software, /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', No)
00:07:29.859 12:51:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:29.859 12:51:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:29.859 12:51:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:29.859
00:07:29.859 real 0m1.398s
00:07:29.859 user 0m1.272s
00:07:29.859 sys 0m0.132s
00:07:29.859 12:51:07 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:29.859 12:51:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:29.859 ************************************
00:07:29.859 END TEST accel_comp
00:07:29.859 ************************************
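The compress case feeds accel_perf a real input corpus (test/accel/bib) via -l. For reference, the run can be reproduced outside the harness with exactly the flags traced above (assumes the same workspace layout as this job):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib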
00:07:29.859 12:51:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y
00:07:29.859 12:51:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:29.859 12:51:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:29.859 12:51:07 accel -- common/autotest_common.sh@10 -- # set +x
00:07:29.859 ************************************
00:07:29.859 START TEST accel_decomp
00:07:29.859 ************************************
00:07:29.859 12:51:07 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y
(accel_test setup trace: local accel_opc; local accel_module)
00:07:29.859 12:51:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y
00:07:29.859 12:51:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y
(build_accel_config trace elided, as above)
00:07:29.859 [2024-05-15 12:51:07.419380] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:29.859 [2024-05-15 12:51:07.419456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494905 ]
00:07:29.859 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.859 [2024-05-15 12:51:07.491373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.859 [2024-05-15 12:51:07.576737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
(accel.sh option loop, values read for this run: 0x1, accel_opc=decompress, '4096 bytes', accel_module=software, /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes)
00:07:31.239 12:51:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:31.239 12:51:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:31.239 12:51:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:31.239
00:07:31.239 real 0m1.409s
00:07:31.239 user 0m1.274s
00:07:31.239 sys 0m0.140s
00:07:31.239 12:51:08 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:31.239 12:51:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:07:31.239 ************************************
00:07:31.239 END TEST accel_decomp
00:07:31.239 ************************************
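accel_decomp is the same decompress workload with -y added, which is why the option loop above records "Yes" where the earlier cases recorded "No": accel_perf verifies each result buffer instead of only timing the operation (the flag's meaning is read from this trace; treat it as an assumption, not documentation). Standalone sketch under the same workspace assumption:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y    # -y: verify results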
00:07:31.239 12:51:08 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:31.239 12:51:08 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:07:31.239 12:51:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:31.239 12:51:08 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.239 ************************************
00:07:31.239 START TEST accel_decmop_full
00:07:31.239 ************************************
00:07:31.239 12:51:08 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0
(accel_test setup trace: local accel_opc; local accel_module)
00:07:31.239 12:51:08 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:31.239 12:51:08 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0
(build_accel_config trace elided, as above)
00:07:31.239 [2024-05-15 12:51:08.907386] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:31.239 [2024-05-15 12:51:08.907444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495109 ]
00:07:31.239 EAL: No free 2048 kB hugepages reported on node 1
00:07:31.239 [2024-05-15 12:51:08.979262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.239 [2024-05-15 12:51:09.063964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
(accel.sh option loop, values read for this run: 0x1, accel_opc=decompress, '111250 bytes', accel_module=software, /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes)
00:07:32.436 12:51:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:32.436 12:51:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:32.437 12:51:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:32.437
00:07:32.437 real 0m1.402s
00:07:32.437 user 0m1.269s
00:07:32.437 sys 0m0.137s
00:07:32.437 12:51:10 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:32.437 12:51:10 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x
00:07:32.437 ************************************
00:07:32.437 END TEST accel_decmop_full
00:07:32.437 ************************************
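Note the traced buffer size for this run: '111250 bytes' rather than the 4096-byte default of the earlier cases. That lines up with -o 0 sizing the buffers to the whole bib input file, an inference from this trace rather than a documented guarantee. Repro sketch under the same workspace assumption:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0    # -o 0: full-size buffers (inferred)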
00:07:32.697 12:51:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:32.697 12:51:10 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:07:32.697 12:51:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:32.697 12:51:10 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.697 ************************************
00:07:32.697 START TEST accel_decomp_mcore
00:07:32.697 ************************************
00:07:32.697 12:51:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
(accel_test setup trace: local accel_opc; local accel_module)
00:07:32.697 12:51:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:32.697 12:51:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf
(build_accel_config trace elided, as above)
00:07:32.697 [2024-05-15 12:51:10.378375] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:32.697 [2024-05-15 12:51:10.378424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495319 ]
00:07:32.697 EAL: No free 2048 kB hugepages reported on node 1
00:07:32.697 [2024-05-15 12:51:10.450694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:32.956 [2024-05-15 12:51:10.546876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:32.956 [2024-05-15 12:51:10.546963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:32.956 [2024-05-15 12:51:10.547043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:32.956 [2024-05-15 12:51:10.547045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
(accel.sh option loop, values read for this run: 0xf, accel_opc=decompress, '4096 bytes', accel_module=software, /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes)
00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:34.334
00:07:34.334 real 0m1.423s
00:07:34.334 user 0m4.679s
00:07:34.334 sys 0m0.138s
00:07:34.334 12:51:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:34.334 12:51:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:34.334 ************************************
00:07:34.334 END TEST accel_decomp_mcore
00:07:34.334 ************************************
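With -m 0xf the EAL core mask goes from 0x1 to 0xf, spdk_app_start reports 4 available cores, and four reactors come up. The timing confirms the parallelism: user 0m4.679s against real 0m1.423s is roughly 4x, i.e. all four cores stayed busy for the ~1.4 s wall-clock run. Repro sketch (same workspace assumption as the earlier sketches):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf    # -m 0xf: core mask for cores 0-3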
accel/accel.sh@19 -- # IFS=: 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.334 00:07:34.334 real 0m1.423s 00:07:34.334 user 0m4.679s 00:07:34.334 sys 0m0.138s 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.334 12:51:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:34.334 ************************************ 00:07:34.334 END TEST accel_decomp_mcore 00:07:34.334 ************************************ 00:07:34.334 12:51:11 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.334 12:51:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:34.334 12:51:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.334 12:51:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.334 ************************************ 00:07:34.334 START TEST accel_decomp_full_mcore 00:07:34.334 ************************************ 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.334 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:34.335 12:51:11 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:34.335 [2024-05-15 12:51:11.906632] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:34.335 [2024-05-15 12:51:11.906700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495585 ] 00:07:34.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.335 [2024-05-15 12:51:11.980801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.335 [2024-05-15 12:51:12.071191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.335 [2024-05-15 12:51:12.071276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.335 [2024-05-15 12:51:12.071354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.335 [2024-05-15 12:51:12.071355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.335 12:51:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.716 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.717 00:07:35.717 real 0m1.444s 00:07:35.717 user 0m4.711s 00:07:35.717 sys 0m0.154s 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.717 12:51:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:35.717 ************************************ 00:07:35.717 END TEST accel_decomp_full_mcore 00:07:35.717 ************************************ 00:07:35.717 12:51:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.717 12:51:13 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:35.717 12:51:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.717 12:51:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.717 ************************************ 00:07:35.717 START TEST accel_decomp_mthread 00:07:35.717 ************************************ 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:35.717 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
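The traces above show the test wrapper assembling an accel_perf run: each val= line is one parsed argument, accel_opc records the opcode under test, and accel_module the backend the check at the end asserts. A minimal sketch of the equivalent manual invocation, assuming a built SPDK tree at the workspace path used throughout this log (the SPDK_DIR shorthand is illustrative, not part of the harness):

# Hedged sketch: replay the decompress workload that accel_test drives above.
# SPDK_DIR is an assumed shorthand for the checkout path seen in this log;
# the flags mirror the parsed values in the trace (-t run time in seconds,
# -w workload, -l input file, -y verify output, -T worker thread count).
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR"/build/examples/accel_perf \
    -t 1 -w decompress \
    -l "$SPDK_DIR"/test/accel/bib \
    -y -T 2

With no hardware accel configuration supplied, the run is expected to land on the software module, which is what the software == software check in the trace asserts.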
00:07:35.717 [2024-05-15 12:51:13.443588] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:35.717 [2024-05-15 12:51:13.443644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495848 ] 00:07:35.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.717 [2024-05-15 12:51:13.515782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.069 [2024-05-15 12:51:13.602653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.069 12:51:13 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.006 00:07:37.006 real 0m1.425s 00:07:37.006 user 0m1.292s 00:07:37.006 sys 0m0.147s 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.006 12:51:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:37.006 ************************************ 00:07:37.006 END TEST accel_decomp_mthread 00:07:37.006 ************************************ 00:07:37.006 12:51:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.006 12:51:14 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:37.006 12:51:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.006 12:51:14 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:37.266 ************************************ 00:07:37.266 START TEST accel_decomp_full_mthread 00:07:37.266 ************************************ 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:37.266 12:51:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:37.266 [2024-05-15 12:51:14.947241] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
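In each of these startups the -c/-m value on the DPDK EAL parameters line is a hexadecimal core mask, and the number of "Reactor started on core N" notices that follow matches its set bits: the 0xf runs above produced reactors on cores 0 through 3, while the single-threaded runs here use 0x1. A purely illustrative bash sketch for decoding such a mask (the mask value below is an example; substitute the -m/-c argument under test):

# Hedged sketch: list the cores selected by an SPDK/DPDK core mask.
mask=0xf
for core in {0..31}; do
    if (( (mask >> core) & 1 )); then
        echo "core $core selected"   # one reactor is expected per set bit
    fi
done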
00:07:37.266 [2024-05-15 12:51:14.947286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496092 ] 00:07:37.266 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.266 [2024-05-15 12:51:15.017725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.266 [2024-05-15 12:51:15.103316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.526 12:51:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.475 00:07:38.475 real 0m1.422s 00:07:38.475 user 0m1.298s 00:07:38.475 sys 0m0.137s 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.475 12:51:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:38.475 ************************************ 00:07:38.475 END TEST accel_decomp_full_mthread 00:07:38.475 
************************************
00:07:38.734 12:51:16 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:07:38.734 12:51:16 accel -- accel/accel.sh@137 -- # build_accel_config
00:07:38.734 12:51:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:38.734 12:51:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.734 12:51:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:38.734 12:51:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:38.734 12:51:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:38.734 12:51:16 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:07:38.734 12:51:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.734 12:51:16 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:38.734 12:51:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:38.734 12:51:16 accel -- accel/accel.sh@41 -- # jq -r .
00:07:38.734 12:51:16 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.734 ************************************
00:07:38.734 START TEST accel_dif_functional_tests
00:07:38.734 ************************************
00:07:38.734 12:51:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:38.734 [2024-05-15 12:51:16.475589] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:07:38.734 [2024-05-15 12:51:16.475637] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496295 ]
00:07:38.734 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.734 [2024-05-15 12:51:16.543795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:38.993 [2024-05-15 12:51:16.630438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:38.993 [2024-05-15 12:51:16.630525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:38.993 [2024-05-15 12:51:16.630528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.993
00:07:38.993
00:07:38.993 CUnit - A unit testing framework for C - Version 2.1-3
00:07:38.993 http://cunit.sourceforge.net/
00:07:38.993
00:07:38.993
00:07:38.993 Suite: accel_dif
00:07:38.993 Test: verify: DIF generated, GUARD check ...passed
00:07:38.993 Test: verify: DIF generated, APPTAG check ...passed
00:07:38.993 Test: verify: DIF generated, REFTAG check ...passed
00:07:38.993 Test: verify: DIF not generated, GUARD check ...[2024-05-15 12:51:16.710236] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:38.993 [2024-05-15 12:51:16.710284] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:38.993 passed
00:07:38.993 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 12:51:16.710331] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:38.993 [2024-05-15 12:51:16.710348] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:38.993 passed
00:07:38.993 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 12:51:16.710367] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:38.993 [2024-05-15 12:51:16.710385] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:38.993 passed
00:07:38.993 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:38.993 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 12:51:16.710428] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:38.993 passed
00:07:38.993 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:38.993 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:38.993 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:38.993 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 12:51:16.710533] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:38.993 passed
00:07:38.993 Test: generate copy: DIF generated, GUARD check ...passed
00:07:38.993 Test: generate copy: DIF generated, APTTAG check ...passed
00:07:38.993 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:38.993 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:38.993 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:38.993 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:38.993 Test: generate copy: iovecs-len validate ...[2024-05-15 12:51:16.710704] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:38.993 passed
00:07:38.993 Test: generate copy: buffer alignment validate ...passed
00:07:38.993
00:07:38.993 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:38.993               suites      1      1    n/a      0        0
00:07:38.993                tests     20     20     20      0        0
00:07:38.993              asserts    204    204    204      0      n/a
00:07:38.993
00:07:38.993 Elapsed time = 0.000 seconds
00:07:39.252
00:07:39.252 real 0m0.495s
00:07:39.252 user 0m0.690s
00:07:39.252 sys 0m0.173s
00:07:39.252 12:51:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:39.252 12:51:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:39.252 ************************************
00:07:39.252 END TEST accel_dif_functional_tests
00:07:39.252 ************************************
00:07:39.252
00:07:39.252 real 0m33.197s
00:07:39.252 user 0m36.000s
00:07:39.252 sys 0m5.184s
00:07:39.252 12:51:16 accel -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:39.252 12:51:16 accel -- common/autotest_common.sh@10 -- # set +x
00:07:39.252 ************************************
00:07:39.252 END TEST accel
00:07:39.252 ************************************
00:07:39.252 12:51:17 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:39.252 12:51:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:39.252 12:51:17 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:39.252 12:51:17 -- common/autotest_common.sh@10 -- # set +x
00:07:39.252 ************************************
00:07:39.252 START TEST accel_rpc
00:07:39.252 ************************************
00:07:39.252 12:51:17 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:39.252 * Looking for test storage...
00:07:39.511 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:39.511 12:51:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.511 12:51:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3496367 00:07:39.511 12:51:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3496367 00:07:39.511 12:51:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3496367 ']' 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.511 12:51:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.511 [2024-05-15 12:51:17.192082] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:07:39.511 [2024-05-15 12:51:17.192142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496367 ] 00:07:39.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.511 [2024-05-15 12:51:17.262607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.511 [2024-05-15 12:51:17.351964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.449 12:51:17 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.449 12:51:17 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.449 12:51:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.449 12:51:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.449 12:51:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.449 12:51:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.449 12:51:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.449 12:51:17 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.449 12:51:17 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.449 12:51:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 ************************************ 00:07:40.449 START TEST accel_assign_opcode 00:07:40.449 ************************************ 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 [2024-05-15 12:51:18.038054] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 [2024-05-15 12:51:18.050089] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.449 software 00:07:40.449 00:07:40.449 real 0m0.270s 00:07:40.449 user 0m0.047s 00:07:40.449 sys 0m0.015s 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.449 12:51:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.449 ************************************ 00:07:40.449 END TEST accel_assign_opcode 00:07:40.449 ************************************ 00:07:40.708 12:51:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3496367 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3496367 ']' 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3496367 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496367 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496367' 00:07:40.708 killing process with pid 3496367 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@965 -- # kill 3496367 00:07:40.708 12:51:18 accel_rpc -- common/autotest_common.sh@970 -- # wait 3496367 00:07:40.967 00:07:40.967 real 0m1.714s 00:07:40.967 user 0m1.713s 00:07:40.967 sys 0m0.520s 00:07:40.967 12:51:18 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.967 12:51:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 ************************************ 00:07:40.967 END TEST accel_rpc 00:07:40.967 ************************************ 00:07:40.967 12:51:18 -- spdk/autotest.sh@181 -- # run_test 
app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:40.967 12:51:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:40.967 12:51:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.967 12:51:18 -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 ************************************ 00:07:41.227 START TEST app_cmdline 00:07:41.227 ************************************ 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:41.227 * Looking for test storage... 00:07:41.227 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:41.227 12:51:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.227 12:51:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3496781 00:07:41.227 12:51:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.227 12:51:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3496781 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3496781 ']' 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.227 12:51:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.227 [2024-05-15 12:51:19.010710] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
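The cmdline suite launches spdk_tgt with an RPC allow-list, so only the two whitelisted methods should answer; the version document and the "Method not found" rejection that follow are the positive and negative halves of that check. A hedged sketch of the same probe by hand, reusing the assumed SPDK_DIR shorthand from the earlier sketch:

# Start a target that serves only the two allowed methods (illustrative).
"$SPDK_DIR"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 1   # crude wait for /var/tmp/spdk.sock; the harness polls instead

# Allowed method: prints the version JSON shown below.
"$SPDK_DIR"/scripts/rpc.py spdk_get_version

# Not on the allow-list: should fail with JSON-RPC error -32601.
"$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"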
00:07:41.227 [2024-05-15 12:51:19.010780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496781 ] 00:07:41.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.227 [2024-05-15 12:51:19.082600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.486 [2024-05-15 12:51:19.174067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.056 12:51:19 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.056 12:51:19 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:42.056 12:51:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:42.315 { 00:07:42.315 "version": "SPDK v24.05-pre git sha1 01137ce67", 00:07:42.315 "fields": { 00:07:42.315 "major": 24, 00:07:42.315 "minor": 5, 00:07:42.315 "patch": 0, 00:07:42.315 "suffix": "-pre", 00:07:42.315 "commit": "01137ce67" 00:07:42.315 } 00:07:42.315 } 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.315 12:51:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.315 12:51:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.315 12:51:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:42.315 12:51:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.315 12:51:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.315 12:51:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.315 12:51:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.315 12:51:20 app_cmdline -- 
common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.315 12:51:20 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.575 request: 00:07:42.575 { 00:07:42.575 "method": "env_dpdk_get_mem_stats", 00:07:42.575 "req_id": 1 00:07:42.575 } 00:07:42.575 Got JSON-RPC error response 00:07:42.575 response: 00:07:42.575 { 00:07:42.575 "code": -32601, 00:07:42.575 "message": "Method not found" 00:07:42.575 } 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.575 12:51:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3496781 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3496781 ']' 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3496781 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496781 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496781' 00:07:42.575 killing process with pid 3496781 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@965 -- # kill 3496781 00:07:42.575 12:51:20 app_cmdline -- common/autotest_common.sh@970 -- # wait 3496781 00:07:42.834 00:07:42.834 real 0m1.769s 00:07:42.834 user 0m2.012s 00:07:42.834 sys 0m0.540s 00:07:42.834 12:51:20 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.834 12:51:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.834 ************************************ 00:07:42.834 END TEST app_cmdline 00:07:42.834 ************************************ 00:07:42.834 12:51:20 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:42.834 12:51:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.834 12:51:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.834 12:51:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.094 ************************************ 00:07:43.094 START TEST version 00:07:43.094 ************************************ 00:07:43.094 12:51:20 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:43.094 * Looking for test storage... 
00:07:43.094 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:43.094 12:51:20 version -- app/version.sh@17 -- # get_header_version major 00:07:43.094 12:51:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # cut -f2 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.094 12:51:20 version -- app/version.sh@17 -- # major=24 00:07:43.094 12:51:20 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.094 12:51:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # cut -f2 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.094 12:51:20 version -- app/version.sh@18 -- # minor=5 00:07:43.094 12:51:20 version -- app/version.sh@19 -- # get_header_version patch 00:07:43.094 12:51:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # cut -f2 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.094 12:51:20 version -- app/version.sh@19 -- # patch=0 00:07:43.094 12:51:20 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.094 12:51:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # cut -f2 00:07:43.094 12:51:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.094 12:51:20 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.094 12:51:20 version -- app/version.sh@22 -- # version=24.5 00:07:43.094 12:51:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.094 12:51:20 version -- app/version.sh@28 -- # version=24.5rc0 00:07:43.094 12:51:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:43.094 12:51:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.094 12:51:20 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:43.094 12:51:20 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:43.094 00:07:43.094 real 0m0.181s 00:07:43.094 user 0m0.091s 00:07:43.094 sys 0m0.136s 00:07:43.094 12:51:20 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.094 12:51:20 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.094 ************************************ 00:07:43.094 END TEST version 00:07:43.094 ************************************ 00:07:43.094 12:51:20 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:43.094 12:51:20 -- spdk/autotest.sh@194 -- # uname -s 00:07:43.094 12:51:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:43.094 12:51:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:43.094 12:51:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:43.094 12:51:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:43.094 12:51:20 -- 
spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:43.094 12:51:20 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:43.094 12:51:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.094 12:51:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.354 12:51:20 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:43.354 12:51:20 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:43.354 12:51:20 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:43.354 12:51:20 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:43.354 12:51:20 -- spdk/autotest.sh@279 -- # '[' rdma = rdma ']' 00:07:43.354 12:51:20 -- spdk/autotest.sh@280 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:43.354 12:51:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.354 12:51:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.354 12:51:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.354 ************************************ 00:07:43.354 START TEST nvmf_rdma 00:07:43.354 ************************************ 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:43.354 * Looking for test storage... 00:07:43.354 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:43.354 12:51:21 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.354 12:51:21 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.354 12:51:21 nvmf_rdma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.354 12:51:21 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 12:51:21 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 12:51:21 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 12:51:21 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:43.354 12:51:21 nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:43.354 12:51:21 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.354 12:51:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:43.354 
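nvmf/common.sh, sourced above and again by each target test below, derives the host ID from the NQN that nvme gen-hostnqn prints. A short sketch of that convention, using the UUID from this run (the parameter expansion is an assumed equivalent, not the verbatim source):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID after the last colon: 809f3706-e051-e711-906e-0017a4403562
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")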
************************************ 00:07:43.354 START TEST nvmf_example 00:07:43.354 ************************************ 00:07:43.354 12:51:21 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:43.614 * Looking for test storage... 00:07:43.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.614 12:51:21 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:43.615 12:51:21 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.615 12:51:21 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:50.183 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:50.183 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.183 12:51:27 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:50.183 Found net devices under 0000:18:00.0: mlx_0_0 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:50.183 Found net devices under 0000:18:00.1: mlx_0_1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:50.183 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:50.183 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:07:50.183 altname enp24s0f0np0 00:07:50.183 altname ens785f0np0 00:07:50.183 inet 192.168.100.8/24 scope global mlx_0_0 00:07:50.183 valid_lft forever preferred_lft forever 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:50.183 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:50.184 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:50.184 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:07:50.184 altname enp24s0f1np1 00:07:50.184 altname ens785f1np1 00:07:50.184 inet 192.168.100.9/24 scope global mlx_0_1 00:07:50.184 valid_lft forever preferred_lft forever 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:50.184 12:51:27 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:50.184 192.168.100.9' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:50.184 192.168.100.9' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:50.184 
12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:50.184 192.168.100.9' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3499990 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3499990 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3499990 ']' 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
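waitforlisten above blocks until the freshly started nvmf example app (pid 3499990) answers on its RPC socket. A minimal polling sketch of the same idea (assumed helper shape, not the repo's exact implementation):

  wait_for_rpc() {
      # Poll the RPC socket until the target responds; ~10 s timeout.
      local sock=${1:-/var/tmp/spdk.sock} i
      local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
      for ((i = 0; i < 100; i++)); do
          "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }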
00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.184 12:51:27 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.443 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:50.702 12:51:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
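The rpc_cmd calls above amount to the following stand-alone target bring-up, which the spdk_nvme_perf invocation then drives with queue depth 64 (-q), 4 KiB I/Os (-o), random read/write at 30% reads (-M 30) for 10 seconds (-t). Condensed sketch, rpc.py path as used in this workspace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512                    # Malloc0: 64 MiB of 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

As a sanity check on the result table below: 24314 IOPS x 4096 B per I/O = 99,590,144 B/s, i.e. about 94.98 MiB/s, matching the reported throughput column.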
00:07:50.702 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.914 Initializing NVMe Controllers 00:08:02.914 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:02.914 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:02.914 Initialization complete. Launching workers. 00:08:02.914 ======================================================== 00:08:02.914 Latency(us) 00:08:02.914 Device Information : IOPS MiB/s Average min max 00:08:02.914 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24314.00 94.98 2631.85 628.26 14018.55 00:08:02.914 ======================================================== 00:08:02.914 Total : 24314.00 94.98 2631.85 628.26 14018.55 00:08:02.914 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:02.914 rmmod nvme_rdma 00:08:02.914 rmmod nvme_fabrics 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3499990 ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3499990 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3499990 ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3499990 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3499990 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3499990' 00:08:02.914 killing process with pid 3499990 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # kill 3499990 00:08:02.914 12:51:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@970 -- # wait 3499990 00:08:02.914 [2024-05-15 12:51:39.967492] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:02.914 nvmf threads initialize successfully 00:08:02.914 bdev subsystem init successfully 00:08:02.914 created a nvmf target service 00:08:02.914 create targets's poll groups done 00:08:02.914 all subsystems of target 
started 00:08:02.914 nvmf target is running 00:08:02.914 all subsystems of target stopped 00:08:02.914 destroy targets's poll groups done 00:08:02.914 destroyed the nvmf target service 00:08:02.914 bdev subsystem finish successfully 00:08:02.914 nvmf threads destroy successfully 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:02.914 00:08:02.914 real 0m18.990s 00:08:02.914 user 0m52.256s 00:08:02.914 sys 0m5.048s 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.914 12:51:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:02.914 ************************************ 00:08:02.914 END TEST nvmf_example 00:08:02.914 ************************************ 00:08:02.915 12:51:40 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:02.915 12:51:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.915 12:51:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.915 12:51:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:02.915 ************************************ 00:08:02.915 START TEST nvmf_filesystem 00:08:02.915 ************************************ 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:02.915 * Looking for test storage... 
00:08:02.915 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:02.915 12:51:40 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # 
CONFIG_UBSAN=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:02.915 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:02.915 #define SPDK_CONFIG_H 00:08:02.915 #define SPDK_CONFIG_APPS 1 00:08:02.915 #define SPDK_CONFIG_ARCH native 00:08:02.915 #undef SPDK_CONFIG_ASAN 00:08:02.915 #undef SPDK_CONFIG_AVAHI 00:08:02.915 #undef SPDK_CONFIG_CET 00:08:02.916 #define SPDK_CONFIG_COVERAGE 1 00:08:02.916 #define SPDK_CONFIG_CROSS_PREFIX 00:08:02.916 #undef SPDK_CONFIG_CRYPTO 00:08:02.916 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:02.916 #undef SPDK_CONFIG_CUSTOMOCF 00:08:02.916 #undef SPDK_CONFIG_DAOS 00:08:02.916 #define SPDK_CONFIG_DAOS_DIR 00:08:02.916 #define SPDK_CONFIG_DEBUG 1 00:08:02.916 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:02.916 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:02.916 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:02.916 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:02.916 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:02.916 #undef SPDK_CONFIG_DPDK_UADK 00:08:02.916 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:02.916 #define SPDK_CONFIG_EXAMPLES 1 00:08:02.916 #undef SPDK_CONFIG_FC 00:08:02.916 #define SPDK_CONFIG_FC_PATH 00:08:02.916 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:02.916 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:02.916 #undef SPDK_CONFIG_FUSE 00:08:02.916 #undef SPDK_CONFIG_FUZZER 00:08:02.916 #define SPDK_CONFIG_FUZZER_LIB 00:08:02.916 #undef SPDK_CONFIG_GOLANG 00:08:02.916 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:02.916 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:02.916 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:02.916 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:02.916 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:02.916 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:02.916 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:02.916 #define SPDK_CONFIG_IDXD 1 00:08:02.916 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:02.916 #undef SPDK_CONFIG_IPSEC_MB 00:08:02.916 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:02.916 #define SPDK_CONFIG_ISAL 1 00:08:02.916 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:02.916 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:02.916 #define SPDK_CONFIG_LIBDIR 00:08:02.916 #undef SPDK_CONFIG_LTO 00:08:02.916 #define SPDK_CONFIG_MAX_LCORES 00:08:02.916 #define SPDK_CONFIG_NVME_CUSE 1 00:08:02.916 #undef SPDK_CONFIG_OCF 00:08:02.916 #define SPDK_CONFIG_OCF_PATH 00:08:02.916 #define SPDK_CONFIG_OPENSSL_PATH 00:08:02.916 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:08:02.916 #define SPDK_CONFIG_PGO_DIR 00:08:02.916 #undef SPDK_CONFIG_PGO_USE 00:08:02.916 #define SPDK_CONFIG_PREFIX /usr/local 00:08:02.916 #undef SPDK_CONFIG_RAID5F 00:08:02.916 #undef SPDK_CONFIG_RBD 00:08:02.916 #define SPDK_CONFIG_RDMA 1 00:08:02.916 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:02.916 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:02.916 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:02.916 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:02.916 #define SPDK_CONFIG_SHARED 1 00:08:02.916 #undef SPDK_CONFIG_SMA 00:08:02.916 #define SPDK_CONFIG_TESTS 1 00:08:02.916 #undef SPDK_CONFIG_TSAN 00:08:02.916 #define SPDK_CONFIG_UBLK 1 00:08:02.916 #define SPDK_CONFIG_UBSAN 1 00:08:02.916 #undef SPDK_CONFIG_UNIT_TESTS 00:08:02.916 #undef SPDK_CONFIG_URING 00:08:02.916 #define SPDK_CONFIG_URING_PATH 00:08:02.916 #undef SPDK_CONFIG_URING_ZNS 00:08:02.916 #undef SPDK_CONFIG_USDT 00:08:02.916 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:02.916 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:02.916 #undef SPDK_CONFIG_VFIO_USER 00:08:02.916 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:02.916 #define SPDK_CONFIG_VHOST 1 00:08:02.916 #define SPDK_CONFIG_VIRTIO 1 00:08:02.916 #undef SPDK_CONFIG_VTUNE 00:08:02.916 #define SPDK_CONFIG_VTUNE_DIR 00:08:02.916 #define SPDK_CONFIG_WERROR 1 00:08:02.916 #define SPDK_CONFIG_WPDK_DIR 00:08:02.916 #undef SPDK_CONFIG_XNVME 00:08:02.916 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:02.916 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # : rdma 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:02.917 12:51:40 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # : mlx5 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.917 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3501724 ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3501724 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.EUjijU 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EUjijU/tests/target /tmp/spdk.EUjijU 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=966955008 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4317474816 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=84882464768 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=94508605440 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9626140672 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=47250927616 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=18892554240 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=18901721088 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9166848 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=47253860352 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 
00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=442368 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=9450856448 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=9450860544 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:02.918 * Looking for test storage... 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:02.918 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=84882464768 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11840733184 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 
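The errtrace/extdebug switch above begins the suite's tracing setup: the next lines install a backtrace trap, redefine PS4 so every traced command carries a wall-clock timestamp plus a file@line tag (that is where the 12:51:40-style prefixes throughout this log come from), and route the xtrace stream to fd 14. A hedged sketch of the same bash technique (the fd number matches the /proc/self/fd/14 probe below; the sink file is an assumption):

    # Route xtrace to a dedicated fd with a timestamped PS4 (bash >= 4.1).
    exec 14>>/tmp/spdk_xtrace.log                    # hypothetical sink on fd 14
    export BASH_XTRACEFD=14
    PS4=' \t ${BASH_SOURCE##*/}@${LINENO} -- \$ '    # \t expands to HH:MM:SS
    set -x
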
00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.919 12:51:40 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
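prepare_net_devs is building its candidate list here: per-family arrays (e810 and x722 for Intel 0x8086, mlx for Mellanox 0x15b3) are filled by looking up pci_bus_cache entries keyed as vendor:device. A hedged sketch of how such a cache can be assembled with lspci; the key layout matches the lookups above, but this construction is illustrative rather than SPDK's exact code:

    # Build a vendor:device -> "PCI addresses" map like the one queried above.
    declare -A pci_bus_cache
    while read -r addr _class vendor device _rest; do
        vendor=${vendor//\"/}; device=${device//\"/}   # strip lspci -mm quoting
        pci_bus_cache["0x${vendor}:0x${device}"]+="${addr} "
    done < <(lspci -Dnmm)
    # e.g. the two ConnectX-4 Lx ports this run finds at 0000:18:00.0/.1:
    echo "${pci_bus_cache[0x15b3:0x1015]:-none}"
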
00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:09.481 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:09.481 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.481 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:09.482 Found net devices under 0000:18:00.0: mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:09.482 Found net devices under 0000:18:00.1: mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:09.482 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:09.482 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:09.482 altname enp24s0f0np0 00:08:09.482 altname ens785f0np0 00:08:09.482 inet 192.168.100.8/24 scope global mlx_0_0 00:08:09.482 valid_lft forever preferred_lft forever 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:08:09.482 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:09.482 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:09.482 altname enp24s0f1np1 00:08:09.482 altname ens785f1np1 00:08:09.482 inet 192.168.100.9/24 scope global mlx_0_1 00:08:09.482 valid_lft forever preferred_lft forever 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
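Both interfaces resolve their IPs through the same three-stage pipeline traced at nvmf/common.sh@113; distilled into a self-contained helper (commands exactly as shown in the trace, wrapper name ours):

  get_ip_address() {
      local interface=$1
      # First IPv4 address on the interface, prefix length stripped.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # 192.168.100.9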
00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:09.482 192.168.100.9' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:09.482 192.168.100.9' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:09.482 192.168.100.9' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.482 12:51:46 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.482 ************************************ 00:08:09.482 START TEST nvmf_filesystem_no_in_capsule 00:08:09.482 ************************************ 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3504460 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3504460 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@827 -- # '[' -z 3504460 ']' 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.483 12:51:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.483 [2024-05-15 12:51:46.581394] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:08:09.483 [2024-05-15 12:51:46.581449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.483 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.483 [2024-05-15 12:51:46.655357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.483 [2024-05-15 12:51:46.740605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.483 [2024-05-15 12:51:46.740651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.483 [2024-05-15 12:51:46.740661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.483 [2024-05-15 12:51:46.740669] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.483 [2024-05-15 12:51:46.740677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
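waitforlisten, traced above, blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A hypothetical condensation of that loop (helper name, poll interval, and the rpc_get_methods probe are our assumptions, not the real common/autotest_common.sh body):

  wait_for_rpc_socket() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      while (( max_retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1                                # assumed poll interval
      done
      return 1
  }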
00:08:09.483 [2024-05-15 12:51:46.740730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.483 [2024-05-15 12:51:46.740816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.483 [2024-05-15 12:51:46.740897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.483 [2024-05-15 12:51:46.740899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.742 [2024-05-15 12:51:47.455030] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:09.742 [2024-05-15 12:51:47.476969] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13680f0/0x136c5e0) succeed. 00:08:09.742 [2024-05-15 12:51:47.487506] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1369730/0x13adc70) succeed. 
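The rdma.c:2726 warning above is the direct consequence of the -c 0 argument: the target clamps the in-capsule data size up to its 256-byte minimum, which it needs to support msdbd=16. For reference, the same transport setup as a standalone RPC call (flags exactly as the test passed them):

  # -t transport type, -u I/O unit size, -c requested in-capsule data size
  # (0 here; the target raises it to 256 as the log notes).
  scripts/rpc.py nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192 -c 0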
00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.742 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.743 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.743 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.002 Malloc1 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.002 [2024-05-15 12:51:47.755406] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:10.002 [2024-05-15 12:51:47.755827] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # 
rpc_cmd bdev_get_bdevs -b Malloc1 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:10.002 { 00:08:10.002 "name": "Malloc1", 00:08:10.002 "aliases": [ 00:08:10.002 "63e37350-d258-4140-9c36-ff556a81cc77" 00:08:10.002 ], 00:08:10.002 "product_name": "Malloc disk", 00:08:10.002 "block_size": 512, 00:08:10.002 "num_blocks": 1048576, 00:08:10.002 "uuid": "63e37350-d258-4140-9c36-ff556a81cc77", 00:08:10.002 "assigned_rate_limits": { 00:08:10.002 "rw_ios_per_sec": 0, 00:08:10.002 "rw_mbytes_per_sec": 0, 00:08:10.002 "r_mbytes_per_sec": 0, 00:08:10.002 "w_mbytes_per_sec": 0 00:08:10.002 }, 00:08:10.002 "claimed": true, 00:08:10.002 "claim_type": "exclusive_write", 00:08:10.002 "zoned": false, 00:08:10.002 "supported_io_types": { 00:08:10.002 "read": true, 00:08:10.002 "write": true, 00:08:10.002 "unmap": true, 00:08:10.002 "write_zeroes": true, 00:08:10.002 "flush": true, 00:08:10.002 "reset": true, 00:08:10.002 "compare": false, 00:08:10.002 "compare_and_write": false, 00:08:10.002 "abort": true, 00:08:10.002 "nvme_admin": false, 00:08:10.002 "nvme_io": false 00:08:10.002 }, 00:08:10.002 "memory_domains": [ 00:08:10.002 { 00:08:10.002 "dma_device_id": "system", 00:08:10.002 "dma_device_type": 1 00:08:10.002 }, 00:08:10.002 { 00:08:10.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.002 "dma_device_type": 2 00:08:10.002 } 00:08:10.002 ], 00:08:10.002 "driver_specific": {} 00:08:10.002 } 00:08:10.002 ]' 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:10.002 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:10.003 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:10.003 12:51:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:11.380 12:51:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.380 12:51:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:11.380 12:51:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:11.380 12:51:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:11.380 12:51:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.285 12:51:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.285 12:51:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:14.222 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:14.222 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.222 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:14.222 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.222 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.481 
************************************ 00:08:14.481 START TEST filesystem_ext4 00:08:14.481 ************************************ 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:14.481 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.482 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.482 Discarding device blocks: 0/522240 done 00:08:14.482 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.482 Filesystem UUID: 09c3d562-692a-4f21-86a5-c7abada80103 00:08:14.482 Superblock backups stored on blocks: 00:08:14.482 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.482 00:08:14.482 Allocating group tables: 0/64 done 00:08:14.482 Writing inode tables: 0/64 done 00:08:14.482 Creating journal (8192 blocks): done 00:08:14.482 Writing superblocks and filesystem accounting information: 0/64 done 00:08:14.482 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.482 12:51:52 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3504460 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.482 00:08:14.482 real 0m0.196s 00:08:14.482 user 0m0.031s 00:08:14.482 sys 0m0.071s 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.482 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:14.482 ************************************ 00:08:14.482 END TEST filesystem_ext4 00:08:14.482 ************************************ 00:08:14.741 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.741 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.742 ************************************ 00:08:14.742 START TEST filesystem_btrfs 00:08:14.742 ************************************ 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:14.742 btrfs-progs v6.6.2 00:08:14.742 See https://btrfs.readthedocs.io for more information. 00:08:14.742 00:08:14.742 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:14.742 NOTE: several default settings have changed in version 5.15, please make sure 00:08:14.742 this does not affect your deployments: 00:08:14.742 - DUP for metadata (-m dup) 00:08:14.742 - enabled no-holes (-O no-holes) 00:08:14.742 - enabled free-space-tree (-R free-space-tree) 00:08:14.742 00:08:14.742 Label: (null) 00:08:14.742 UUID: 29aa99bd-1ce5-4e73-b290-9425a2d35338 00:08:14.742 Node size: 16384 00:08:14.742 Sector size: 4096 00:08:14.742 Filesystem size: 510.00MiB 00:08:14.742 Block group profiles: 00:08:14.742 Data: single 8.00MiB 00:08:14.742 Metadata: DUP 32.00MiB 00:08:14.742 System: DUP 8.00MiB 00:08:14.742 SSD detected: yes 00:08:14.742 Zoned device: no 00:08:14.742 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:14.742 Runtime features: free-space-tree 00:08:14.742 Checksum: crc32c 00:08:14.742 Number of devices: 1 00:08:14.742 Devices: 00:08:14.742 ID SIZE PATH 00:08:14.742 1 510.00MiB /dev/nvme0n1p1 00:08:14.742 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:14.742 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3504460 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.001 00:08:15.001 real 0m0.272s 00:08:15.001 user 0m0.032s 00:08:15.001 sys 0m0.135s 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:15.001 ************************************ 00:08:15.001 END TEST 
filesystem_btrfs 00:08:15.001 ************************************ 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.001 ************************************ 00:08:15.001 START TEST filesystem_xfs 00:08:15.001 ************************************ 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.001 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:15.002 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:15.324 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:15.324 = sectsz=512 attr=2, projid32bit=1 00:08:15.324 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:15.324 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:15.324 data = bsize=4096 blocks=130560, imaxpct=25 00:08:15.324 = sunit=0 swidth=0 blks 00:08:15.324 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:15.324 log =internal log bsize=4096 blocks=16384, version=2 00:08:15.324 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:15.324 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:15.324 Discarding blocks...Done. 
00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3504460 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.324 12:51:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.324 00:08:15.324 real 0m0.218s 00:08:15.324 user 0m0.022s 00:08:15.324 sys 0m0.083s 00:08:15.324 12:51:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.324 12:51:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:15.324 ************************************ 00:08:15.324 END TEST filesystem_xfs 00:08:15.324 ************************************ 00:08:15.324 12:51:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:15.324 12:51:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:15.324 12:51:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3504460 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3504460 ']' 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3504460 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3504460 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3504460' 00:08:16.259 killing process with pid 3504460 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3504460 00:08:16.259 [2024-05-15 12:51:54.132388] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:16.259 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3504460 00:08:16.517 [2024-05-15 12:51:54.191064] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.776 00:08:16.776 real 0m8.064s 00:08:16.776 user 0m31.297s 00:08:16.776 sys 0m1.265s 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.776 ************************************ 00:08:16.776 END TEST nvmf_filesystem_no_in_capsule 00:08:16.776 ************************************ 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem -- 
target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.776 12:51:54 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.034 ************************************ 00:08:17.034 START TEST nvmf_filesystem_in_capsule 00:08:17.034 ************************************ 00:08:17.034 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:17.034 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3505746 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3505746 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3505746 ']' 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.035 12:51:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.035 [2024-05-15 12:51:54.744121] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:08:17.035 [2024-05-15 12:51:54.744182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.035 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.035 [2024-05-15 12:51:54.817143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.035 [2024-05-15 12:51:54.910206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.035 [2024-05-15 12:51:54.910246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
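The in-capsule pass relaunches the target exactly as before, just with a new pid. As a sketch (binary path and flags as printed in the trace; the wait helper is the illustrative one from earlier):

  spdk_bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
  "$spdk_bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: four reactors, cores 0-3
  nvmfpid=$!
  wait_for_rpc_socket "$nvmfpid"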
00:08:17.035 [2024-05-15 12:51:54.910256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.035 [2024-05-15 12:51:54.910266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.035 [2024-05-15 12:51:54.910273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.035 [2024-05-15 12:51:54.910330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.035 [2024-05-15 12:51:54.910420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.035 [2024-05-15 12:51:54.910447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.035 [2024-05-15 12:51:54.910449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.970 [2024-05-15 12:51:55.639884] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd6f0f0/0xd735e0) succeed. 00:08:17.970 [2024-05-15 12:51:55.650451] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd70730/0xdb4c70) succeed. 
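For reference, the target-side RPC sequence this setup continues with below boils down to the following sketch, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the rpc_cmd wrapper in the log is the same calls); the -c 4096 in-capsule data size is what distinguishes this test variant:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420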
00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.970 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.231 Malloc1 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.231 [2024-05-15 12:51:55.923277] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:18.231 [2024-05-15 12:51:55.923666] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.231 12:51:55 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:18.231 { 00:08:18.231 "name": "Malloc1", 00:08:18.231 "aliases": [ 00:08:18.231 "d59d37a2-71ef-4a05-8555-a26123193e9c" 00:08:18.231 ], 00:08:18.231 "product_name": "Malloc disk", 00:08:18.231 "block_size": 512, 00:08:18.231 "num_blocks": 1048576, 00:08:18.231 "uuid": "d59d37a2-71ef-4a05-8555-a26123193e9c", 00:08:18.231 "assigned_rate_limits": { 00:08:18.231 "rw_ios_per_sec": 0, 00:08:18.231 "rw_mbytes_per_sec": 0, 00:08:18.231 "r_mbytes_per_sec": 0, 00:08:18.231 "w_mbytes_per_sec": 0 00:08:18.231 }, 00:08:18.231 "claimed": true, 00:08:18.231 "claim_type": "exclusive_write", 00:08:18.231 "zoned": false, 00:08:18.231 "supported_io_types": { 00:08:18.231 "read": true, 00:08:18.231 "write": true, 00:08:18.231 "unmap": true, 00:08:18.231 "write_zeroes": true, 00:08:18.231 "flush": true, 00:08:18.231 "reset": true, 00:08:18.231 "compare": false, 00:08:18.231 "compare_and_write": false, 00:08:18.231 "abort": true, 00:08:18.231 "nvme_admin": false, 00:08:18.231 "nvme_io": false 00:08:18.231 }, 00:08:18.231 "memory_domains": [ 00:08:18.231 { 00:08:18.231 "dma_device_id": "system", 00:08:18.231 "dma_device_type": 1 00:08:18.231 }, 00:08:18.231 { 00:08:18.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.231 "dma_device_type": 2 00:08:18.231 } 00:08:18.231 ], 00:08:18.231 "driver_specific": {} 00:08:18.231 } 00:08:18.231 ]' 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:18.231 12:51:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.231 12:51:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:19.164 12:51:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.164 12:51:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:19.164 12:51:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.164 12:51:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:19.164 12:51:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:21.698 12:51:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.636 ************************************ 00:08:22.636 START TEST filesystem_in_capsule_ext4 00:08:22.636 ************************************ 00:08:22.636 12:52:00 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:22.636 mke2fs 1.46.5 (30-Dec-2021) 00:08:22.636 Discarding device blocks: 0/522240 done 00:08:22.636 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:22.636 Filesystem UUID: f8c98e43-82b1-4442-be9a-b68ef12ec1a6 00:08:22.636 Superblock backups stored on blocks: 00:08:22.636 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:22.636 00:08:22.636 Allocating group tables: 0/64 done 00:08:22.636 Writing inode tables: 0/64 done 00:08:22.636 Creating journal (8192 blocks): done 00:08:22.636 Writing superblocks and filesystem accounting information: 0/64 done 00:08:22.636 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.636 12:52:00 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3505746 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.636 00:08:22.636 real 0m0.203s 00:08:22.636 user 0m0.021s 00:08:22.636 sys 0m0.086s 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.636 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:22.636 ************************************ 00:08:22.636 END TEST filesystem_in_capsule_ext4 00:08:22.636 ************************************ 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.895 ************************************ 00:08:22.895 START TEST filesystem_in_capsule_btrfs 00:08:22.895 ************************************ 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@930 -- # force=-f 00:08:22.895 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:22.895 btrfs-progs v6.6.2 00:08:22.895 See https://btrfs.readthedocs.io for more information. 00:08:22.895 00:08:22.895 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:22.895 NOTE: several default settings have changed in version 5.15, please make sure 00:08:22.895 this does not affect your deployments: 00:08:22.895 - DUP for metadata (-m dup) 00:08:22.895 - enabled no-holes (-O no-holes) 00:08:22.895 - enabled free-space-tree (-R free-space-tree) 00:08:22.895 00:08:22.895 Label: (null) 00:08:22.895 UUID: 3e44abb6-4807-4c35-ad21-bfb5d61057f6 00:08:22.895 Node size: 16384 00:08:22.895 Sector size: 4096 00:08:22.895 Filesystem size: 510.00MiB 00:08:22.895 Block group profiles: 00:08:22.895 Data: single 8.00MiB 00:08:22.895 Metadata: DUP 32.00MiB 00:08:22.895 System: DUP 8.00MiB 00:08:22.895 SSD detected: yes 00:08:22.895 Zoned device: no 00:08:22.895 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:22.895 Runtime features: free-space-tree 00:08:22.896 Checksum: crc32c 00:08:22.896 Number of devices: 1 00:08:22.896 Devices: 00:08:22.896 ID SIZE PATH 00:08:22.896 1 510.00MiB /dev/nvme0n1p1 00:08:22.896 00:08:22.896 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:22.896 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3505746 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.154 00:08:23.154 real 0m0.275s 00:08:23.154 user 0m0.028s 00:08:23.154 sys 0m0.145s 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:23.154 ************************************ 00:08:23.154 END TEST filesystem_in_capsule_btrfs 00:08:23.154 ************************************ 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.154 ************************************ 00:08:23.154 START TEST filesystem_in_capsule_xfs 00:08:23.154 ************************************ 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:23.154 12:52:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:23.154 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:23.154 = sectsz=512 attr=2, projid32bit=1 00:08:23.154 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:23.154 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:23.154 data = bsize=4096 blocks=130560, imaxpct=25 00:08:23.154 = sunit=0 swidth=0 blks 00:08:23.154 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:23.154 log =internal log bsize=4096 blocks=16384, version=2 00:08:23.154 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:23.154 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:23.413 Discarding blocks...Done. 
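The per-filesystem smoke test that the ext4, btrfs and xfs subtests each run (target/filesystem.sh@21-30 in the log) amounts to the sketch below, assuming the connected namespace appears as /dev/nvme0n1 as it does in this run:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  mkfs.xfs -f /dev/nvme0n1p1        # mkfs.ext4 -F and mkfs.btrfs -f for the other variants
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa             # write a file over the fabric
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device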
00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3505746 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.413 00:08:23.413 real 0m0.194s 00:08:23.413 user 0m0.029s 00:08:23.413 sys 0m0.073s 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:23.413 ************************************ 00:08:23.413 END TEST filesystem_in_capsule_xfs 00:08:23.413 ************************************ 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:23.413 12:52:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.349 12:52:02 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.349 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3505746 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3505746 ']' 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3505746 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3505746 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3505746' 00:08:24.609 killing process with pid 3505746 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3505746 00:08:24.609 [2024-05-15 12:52:02.289151] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:24.609 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3505746 00:08:24.609 [2024-05-15 12:52:02.372289] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.177 00:08:25.177 real 0m8.080s 00:08:25.177 user 0m31.222s 00:08:25.177 sys 0m1.351s 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.177 ************************************ 00:08:25.177 END TEST nvmf_filesystem_in_capsule 00:08:25.177 ************************************ 00:08:25.177 
12:52:02 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:25.177 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:25.178 rmmod nvme_rdma 00:08:25.178 rmmod nvme_fabrics 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:25.178 00:08:25.178 real 0m22.567s 00:08:25.178 user 1m4.399s 00:08:25.178 sys 0m7.352s 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.178 12:52:02 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.178 ************************************ 00:08:25.178 END TEST nvmf_filesystem 00:08:25.178 ************************************ 00:08:25.178 12:52:02 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:25.178 12:52:02 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:25.178 12:52:02 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.178 12:52:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:25.178 ************************************ 00:08:25.178 START TEST nvmf_target_discovery 00:08:25.178 ************************************ 00:08:25.178 12:52:02 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:25.461 * Looking for test storage... 
00:08:25.461 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.461 12:52:03 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.031 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:32.032 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:32.032 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.032 12:52:08 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:32.032 Found net devices under 0000:18:00.0: mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:32.032 Found net devices under 0000:18:00.1: mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:32.032 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.032 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:32.032 altname enp24s0f0np0 00:08:32.032 altname ens785f0np0 00:08:32.032 inet 192.168.100.8/24 scope global mlx_0_0 00:08:32.032 valid_lft forever preferred_lft forever 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:32.032 12:52:08 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:32.032 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.032 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:32.032 altname enp24s0f1np1 00:08:32.032 altname ens785f1np1 00:08:32.032 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.032 valid_lft forever preferred_lft forever 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:32.032 192.168.100.9' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:32.032 192.168.100.9' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:32.032 192.168.100.9' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3509851 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3509851 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3509851 ']' 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.032 12:52:08 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.032 [2024-05-15 12:52:08.925548] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:08:32.032 [2024-05-15 12:52:08.925599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.032 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.032 [2024-05-15 12:52:08.997748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.032 [2024-05-15 12:52:09.086553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.032 [2024-05-15 12:52:09.086593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.032 [2024-05-15 12:52:09.086602] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.032 [2024-05-15 12:52:09.086610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.032 [2024-05-15 12:52:09.086618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.032 [2024-05-15 12:52:09.086661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.032 [2024-05-15 12:52:09.086750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.032 [2024-05-15 12:52:09.086825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.032 [2024-05-15 12:52:09.086827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.032 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.032 [2024-05-15 12:52:09.813269] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16360f0/0x163a5e0) succeed. 00:08:32.032 [2024-05-15 12:52:09.823766] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1637730/0x167bc70) succeed. 
00:08:32.291 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.291 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:32.291 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.291 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:32.291 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 Null1 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 [2024-05-15 12:52:09.995732] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:32.292 [2024-05-15 12:52:09.996068] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.292 12:52:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 Null2 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 Null3 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 Null4 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.292 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:32.552 00:08:32.552 Discovery Log Number of Records 6, Generation counter 6 00:08:32.552 =====Discovery Log Entry 0====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: current discovery subsystem 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4420 00:08:32.552 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.552 traddr: 192.168.100.8 00:08:32.552 eflags: explicit discovery connections, duplicate discovery information 00:08:32.552 rdma_prtype: not specified 00:08:32.552 rdma_qptype: connected 00:08:32.552 rdma_cms: rdma-cm 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 =====Discovery Log Entry 1====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: nvme subsystem 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4420 00:08:32.552 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:32.552 traddr: 192.168.100.8 
00:08:32.552 eflags: none 00:08:32.552 rdma_prtype: not specified 00:08:32.552 rdma_qptype: connected 00:08:32.552 rdma_cms: rdma-cm 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 =====Discovery Log Entry 2====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: nvme subsystem 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4420 00:08:32.552 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:32.552 traddr: 192.168.100.8 00:08:32.552 eflags: none 00:08:32.552 rdma_prtype: not specified 00:08:32.552 rdma_qptype: connected 00:08:32.552 rdma_cms: rdma-cm 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 =====Discovery Log Entry 3====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: nvme subsystem 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4420 00:08:32.552 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:32.552 traddr: 192.168.100.8 00:08:32.552 eflags: none 00:08:32.552 rdma_prtype: not specified 00:08:32.552 rdma_qptype: connected 00:08:32.552 rdma_cms: rdma-cm 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 =====Discovery Log Entry 4====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: nvme subsystem 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4420 00:08:32.552 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:32.552 traddr: 192.168.100.8 00:08:32.552 eflags: none 00:08:32.552 rdma_prtype: not specified 00:08:32.552 rdma_qptype: connected 00:08:32.552 rdma_cms: rdma-cm 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 =====Discovery Log Entry 5====== 00:08:32.552 trtype: rdma 00:08:32.552 adrfam: ipv4 00:08:32.552 subtype: discovery subsystem referral 00:08:32.552 treq: not required 00:08:32.552 portid: 0 00:08:32.552 trsvcid: 4430 00:08:32.552 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.552 traddr: 192.168.100.8 00:08:32.552 eflags: none 00:08:32.552 rdma_prtype: unrecognized 00:08:32.552 rdma_qptype: unrecognized 00:08:32.552 rdma_cms: unrecognized 00:08:32.552 rdma_pkey: 0x0000 00:08:32.552 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:32.552 Perform nvmf subsystem discovery via RPC 00:08:32.552 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:32.552 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.552 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.552 [ 00:08:32.552 { 00:08:32.552 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:32.552 "subtype": "Discovery", 00:08:32.552 "listen_addresses": [ 00:08:32.552 { 00:08:32.552 "trtype": "RDMA", 00:08:32.552 "adrfam": "IPv4", 00:08:32.552 "traddr": "192.168.100.8", 00:08:32.552 "trsvcid": "4420" 00:08:32.552 } 00:08:32.552 ], 00:08:32.552 "allow_any_host": true, 00:08:32.552 "hosts": [] 00:08:32.552 }, 00:08:32.552 { 00:08:32.552 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.552 "subtype": "NVMe", 00:08:32.552 "listen_addresses": [ 00:08:32.552 { 00:08:32.552 "trtype": "RDMA", 00:08:32.552 "adrfam": "IPv4", 00:08:32.552 "traddr": "192.168.100.8", 00:08:32.552 "trsvcid": "4420" 00:08:32.552 } 00:08:32.552 ], 00:08:32.552 "allow_any_host": true, 00:08:32.552 "hosts": [], 00:08:32.552 "serial_number": "SPDK00000000000001", 00:08:32.552 "model_number": "SPDK bdev Controller", 00:08:32.552 "max_namespaces": 32, 00:08:32.552 "min_cntlid": 1, 00:08:32.552 "max_cntlid": 65519, 
00:08:32.552 "namespaces": [ 00:08:32.552 { 00:08:32.552 "nsid": 1, 00:08:32.552 "bdev_name": "Null1", 00:08:32.552 "name": "Null1", 00:08:32.552 "nguid": "ED1E9AF45A1E45BE9FB167B5868C5186", 00:08:32.552 "uuid": "ed1e9af4-5a1e-45be-9fb1-67b5868c5186" 00:08:32.552 } 00:08:32.552 ] 00:08:32.552 }, 00:08:32.552 { 00:08:32.552 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:32.552 "subtype": "NVMe", 00:08:32.552 "listen_addresses": [ 00:08:32.552 { 00:08:32.552 "trtype": "RDMA", 00:08:32.552 "adrfam": "IPv4", 00:08:32.552 "traddr": "192.168.100.8", 00:08:32.552 "trsvcid": "4420" 00:08:32.552 } 00:08:32.552 ], 00:08:32.552 "allow_any_host": true, 00:08:32.552 "hosts": [], 00:08:32.552 "serial_number": "SPDK00000000000002", 00:08:32.552 "model_number": "SPDK bdev Controller", 00:08:32.552 "max_namespaces": 32, 00:08:32.552 "min_cntlid": 1, 00:08:32.552 "max_cntlid": 65519, 00:08:32.552 "namespaces": [ 00:08:32.552 { 00:08:32.552 "nsid": 1, 00:08:32.552 "bdev_name": "Null2", 00:08:32.552 "name": "Null2", 00:08:32.552 "nguid": "5794E726521F48B7A381D3DCF1503D16", 00:08:32.552 "uuid": "5794e726-521f-48b7-a381-d3dcf1503d16" 00:08:32.552 } 00:08:32.553 ] 00:08:32.553 }, 00:08:32.553 { 00:08:32.553 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:32.553 "subtype": "NVMe", 00:08:32.553 "listen_addresses": [ 00:08:32.553 { 00:08:32.553 "trtype": "RDMA", 00:08:32.553 "adrfam": "IPv4", 00:08:32.553 "traddr": "192.168.100.8", 00:08:32.553 "trsvcid": "4420" 00:08:32.553 } 00:08:32.553 ], 00:08:32.553 "allow_any_host": true, 00:08:32.553 "hosts": [], 00:08:32.553 "serial_number": "SPDK00000000000003", 00:08:32.553 "model_number": "SPDK bdev Controller", 00:08:32.553 "max_namespaces": 32, 00:08:32.553 "min_cntlid": 1, 00:08:32.553 "max_cntlid": 65519, 00:08:32.553 "namespaces": [ 00:08:32.553 { 00:08:32.553 "nsid": 1, 00:08:32.553 "bdev_name": "Null3", 00:08:32.553 "name": "Null3", 00:08:32.553 "nguid": "278EFAB086394564AC827356A797CEB3", 00:08:32.553 "uuid": "278efab0-8639-4564-ac82-7356a797ceb3" 00:08:32.553 } 00:08:32.553 ] 00:08:32.553 }, 00:08:32.553 { 00:08:32.553 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:32.553 "subtype": "NVMe", 00:08:32.553 "listen_addresses": [ 00:08:32.553 { 00:08:32.553 "trtype": "RDMA", 00:08:32.553 "adrfam": "IPv4", 00:08:32.553 "traddr": "192.168.100.8", 00:08:32.553 "trsvcid": "4420" 00:08:32.553 } 00:08:32.553 ], 00:08:32.553 "allow_any_host": true, 00:08:32.553 "hosts": [], 00:08:32.553 "serial_number": "SPDK00000000000004", 00:08:32.553 "model_number": "SPDK bdev Controller", 00:08:32.553 "max_namespaces": 32, 00:08:32.553 "min_cntlid": 1, 00:08:32.553 "max_cntlid": 65519, 00:08:32.553 "namespaces": [ 00:08:32.553 { 00:08:32.553 "nsid": 1, 00:08:32.553 "bdev_name": "Null4", 00:08:32.553 "name": "Null4", 00:08:32.553 "nguid": "02D27D6CDECA404EA48E303262F04573", 00:08:32.553 "uuid": "02d27d6c-deca-404e-a48e-303262f04573" 00:08:32.553 } 00:08:32.553 ] 00:08:32.553 } 00:08:32.553 ] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 
nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:32.553 rmmod nvme_rdma 00:08:32.553 rmmod nvme_fabrics 00:08:32.553 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3509851 ']' 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3509851 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3509851 ']' 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3509851 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3509851 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3509851' 00:08:32.813 killing process with pid 
3509851 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3509851 00:08:32.813 [2024-05-15 12:52:10.486700] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:32.813 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3509851 00:08:32.813 [2024-05-15 12:52:10.575190] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:33.073 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.073 12:52:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:33.073 00:08:33.073 real 0m7.847s 00:08:33.073 user 0m8.430s 00:08:33.073 sys 0m4.846s 00:08:33.073 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.073 12:52:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 ************************************ 00:08:33.073 END TEST nvmf_target_discovery 00:08:33.073 ************************************ 00:08:33.073 12:52:10 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:33.073 12:52:10 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.073 12:52:10 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.073 12:52:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 ************************************ 00:08:33.073 START TEST nvmf_referrals 00:08:33.073 ************************************ 00:08:33.073 12:52:10 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:33.333 * Looking for test storage... 
00:08:33.333 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:33.333 12:52:10 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.333 12:52:10 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:33.333 12:52:11 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.333 12:52:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:38.608 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:38.608 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:38.608 Found net devices under 0000:18:00.0: mlx_0_0 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:38.608 Found net devices under 0000:18:00.1: mlx_0_1 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.608 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:38.609 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.609 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:38.609 altname enp24s0f0np0 00:08:38.609 altname ens785f0np0 00:08:38.609 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.609 valid_lft forever preferred_lft forever 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:38.609 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.609 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:38.609 altname enp24s0f1np1 00:08:38.609 altname ens785f1np1 00:08:38.609 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.609 valid_lft forever preferred_lft forever 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:38.609 12:52:16 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.609 192.168.100.9' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:38.609 192.168.100.9' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.609 12:52:16 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:38.609 192.168.100.9' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3512860 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3512860 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3512860 ']' 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.609 12:52:16 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.609 [2024-05-15 12:52:16.454964] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:08:38.609 [2024-05-15 12:52:16.455033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.609 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.869 [2024-05-15 12:52:16.527576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.869 [2024-05-15 12:52:16.614247] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.869 [2024-05-15 12:52:16.614291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.869 [2024-05-15 12:52:16.614301] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.869 [2024-05-15 12:52:16.614309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
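All of the address bookkeeping above comes down to two short pipelines that appear verbatim in the trace: get_ip_address drops the prefix length from the 'ip -o -4' output, and the first and second target IPs are simply the first two lines of the collected list. Condensed (interface names hard-coded here for illustration):

    # IPv4 address of one interface, without the /24 suffix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One IP per line, mirroring get_available_rdma_ips.
    RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)

    # Line 1 -> primary target IP, line 2 -> secondary target IP.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9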
00:08:38.869 [2024-05-15 12:52:16.614317] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.869 [2024-05-15 12:52:16.614366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.869 [2024-05-15 12:52:16.614387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.869 [2024-05-15 12:52:16.614463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.869 [2024-05-15 12:52:16.614465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.437 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:39.437 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:39.437 12:52:17 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.437 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.437 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 [2024-05-15 12:52:17.353837] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9fb0f0/0x9ff5e0) succeed. 00:08:39.696 [2024-05-15 12:52:17.364410] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9fc730/0xa40c70) succeed. 
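rpc_cmd in this trace effectively forwards its arguments to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the target bring-up that just happened can be reproduced by hand roughly as follows (paths relative to an SPDK checkout; the polling loop is a simplification of waitforlisten, with rpc_get_methods assumed as the readiness probe):

    # Start the NVMe-oF target app: shm id 0, all trace groups, 4 cores.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app answers on the default RPC socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # Same transport options as the test: RDMA, 1024 shared buffers,
    # 8 KiB I/O unit size.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192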
00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 [2024-05-15 12:52:17.497028] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:39.696 [2024-05-15 12:52:17.497433] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.696 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.955 12:52:17 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.955 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.214 12:52:17 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.214 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:40.474 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.733 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.992 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.992 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.993 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:40.993 rmmod nvme_rdma 00:08:40.993 rmmod nvme_fabrics 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3512860 ']' 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3512860 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3512860 ']' 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3512860 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3512860 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3512860' 00:08:41.252 killing process with pid 3512860 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3512860 
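Stripped of the xtrace noise, the referral exercise that just completed is a symmetric add/verify/remove loop, checked once through the RPC interface and once through the host-visible discovery log. The jq filters below are verbatim from the trace; NVME_HOST stands in for the --hostnqn/--hostid pair the suite passes, and rpc.py is run from an SPDK checkout:

    # Target side: publish three referrals on the discovery subsystem.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    (( $(scripts/rpc.py nvmf_discovery_get_referrals | jq length) == 3 ))

    # Host side: the same three addresses must appear in the discovery log.
    nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    # Remove them again and confirm the referral list is empty.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a "$ip" -s 4430
    done
    (( $(scripts/rpc.py nvmf_discovery_get_referrals | jq length) == 0 ))

The second half of the test (the -n nqn.2016-06.io.spdk:cnode1 and -n discovery variants) repeats the same pattern for referrals that name a specific subsystem, using jq 'select(.subtype == "nvme subsystem")' versus 'select(.subtype == "discovery subsystem referral")' to tell the two record types apart.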
00:08:41.252 [2024-05-15 12:52:18.941119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:41.252 12:52:18 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3512860 00:08:41.252 [2024-05-15 12:52:19.025041] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:41.511 12:52:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.511 12:52:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:41.511 00:08:41.511 real 0m8.356s 00:08:41.511 user 0m12.539s 00:08:41.511 sys 0m4.985s 00:08:41.511 12:52:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.511 12:52:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.511 ************************************ 00:08:41.511 END TEST nvmf_referrals 00:08:41.511 ************************************ 00:08:41.511 12:52:19 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:41.511 12:52:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:41.511 12:52:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.511 12:52:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:41.511 ************************************ 00:08:41.511 START TEST nvmf_connect_disconnect 00:08:41.511 ************************************ 00:08:41.511 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:41.771 * Looking for test storage... 
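The nvmftestfini sequence above splits into two independent steps: unloading the host-side NVMe/RDMA modules (retried, since references can linger briefly after a disconnect) and killing the target by PID, but only after checking that the PID still names an SPDK reactor. A reduced sketch following the calls visible in the trace (the retry pacing is an assumption):

    # Host side: unload nvme-rdma, then nvme-fabrics.
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics

    # Target side: kill only if the PID is alive and still a reactor.
    if kill -0 "$nvmfpid" 2>/dev/null &&
       [[ $(ps --no-headers -o comm= "$nvmfpid") == reactor_0 ]]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi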
00:08:41.771 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.771 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 
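Each test re-sources nvmf/common.sh, which is why the host NQN above is identical to the one the referrals test used: on this rig nvme gen-hostnqn returns a stable value, consistent with nvme-cli deriving the NQN from the machine's DMI UUID when one is available. The host-identity setup reduces to the following (the HOSTID extraction is shown as an assumption about how common.sh derives it):

    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: the uuid part of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn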
00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.772 12:52:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.435 12:52:25 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:48.435 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:48.435 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.435 12:52:25 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:48.435 Found net devices under 0000:18:00.0: mlx_0_0 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:48.435 Found net devices under 0000:18:00.1: mlx_0_1 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:48.435 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:48.436 12:52:25 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:48.436 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.436 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:48.436 altname enp24s0f0np0 00:08:48.436 altname ens785f0np0 00:08:48.436 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.436 valid_lft forever preferred_lft forever 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:48.436 
12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:48.436 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.436 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:48.436 altname enp24s0f1np1 00:08:48.436 altname ens785f1np1 00:08:48.436 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.436 valid_lft forever preferred_lft forever 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.436 192.168.100.9' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:48.436 192.168.100.9' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:48.436 192.168.100.9' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3516095 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3516095 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3516095 ']' 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
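The device walk that just repeated answers two questions per PCI function: is it a supported NIC (vendor 0x15b3 Mellanox, device 0x1015, i.e. ConnectX-4 Lx), and which netdev does it own. The netdev lookup is plain sysfs globbing, exactly as the trace shows; the vendor/device reads below are a hand-rolled stand-in for the harness's pci_bus_cache map:

    net_devs=()
    for pci in 0000:18:00.0 0000:18:00.1; do
        vendor=$(<"/sys/bus/pci/devices/$pci/vendor")   # 0x15b3 -> Mellanox
        device=$(<"/sys/bus/pci/devices/$pci/device")   # 0x1015 -> ConnectX-4 Lx
        echo "Found $pci ($vendor - $device)"

        # A NIC's network interfaces hang off its PCI node in sysfs.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")         # keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

Because the ports are mlx5, the suite also swaps in NVME_CONNECT='nvme connect -i 15', which asks for 15 I/O queues per connect (-i is nvme-cli's --nr-io-queues).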
00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 12:52:25 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.436 [2024-05-15 12:52:25.414768] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:08:48.436 [2024-05-15 12:52:25.414824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.436 [2024-05-15 12:52:25.487687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.436 [2024-05-15 12:52:25.579417] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.437 [2024-05-15 12:52:25.579461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.437 [2024-05-15 12:52:25.579471] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.437 [2024-05-15 12:52:25.579480] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.437 [2024-05-15 12:52:25.579488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.437 [2024-05-15 12:52:25.579544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.437 [2024-05-15 12:52:25.579628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.437 [2024-05-15 12:52:25.579652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.437 [2024-05-15 12:52:25.579651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.437 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.437 [2024-05-15 12:52:26.284217] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:48.437 [2024-05-15 12:52:26.305979] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb80f0/0x1cbc5e0) succeed. 00:08:48.696 [2024-05-15 12:52:26.316423] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cb9730/0x1cfdc70) succeed. 
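
The target itself is then provisioned over JSON-RPC: the rdma transport just created above, followed by the malloc bdev, subsystem, namespace, and listener that the trace walks through below. Condensed into direct rpc.py calls, roughly what the traced rpc_cmd wrapper executes (all values come from this log):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
bdev=$($rpc bdev_malloc_create 64 512)    # 64 MB, 512 B blocks; prints the name, "Malloc0" here
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
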
00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 [2024-05-15 12:52:26.463428] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:48.696 [2024-05-15 12:52:26.463819] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:48.696 12:52:26 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:52.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.763 12:52:46 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:08.763 rmmod nvme_rdma 00:09:08.763 rmmod nvme_fabrics 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3516095 ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3516095 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3516095 ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3516095 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3516095 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3516095' 00:09:08.763 killing process with pid 3516095 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3516095 00:09:08.763 [2024-05-15 12:52:46.447136] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:08.763 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3516095 00:09:08.763 [2024-05-15 12:52:46.498232] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:09.022 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.022 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:09.022 00:09:09.022 real 0m27.408s 00:09:09.022 user 1m25.638s 00:09:09.022 sys 0m5.600s 00:09:09.022 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.022 12:52:46 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 ************************************ 00:09:09.022 END TEST nvmf_connect_disconnect 00:09:09.022 ************************************ 00:09:09.022 12:52:46 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:09.022 12:52:46 nvmf_rdma 
-- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:09.022 12:52:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.022 12:52:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:09.022 ************************************ 00:09:09.022 START TEST nvmf_multitarget 00:09:09.022 ************************************ 00:09:09.022 12:52:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:09.281 * Looking for test storage... 00:09:09.281 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:09.281 
12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.281 12:52:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.852 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.853 
12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:15.853 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:15.853 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.853 12:52:52 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:15.853 Found net devices under 0000:18:00.0: mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:15.853 Found net devices under 0000:18:00.1: mlx_0_1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.853 12:52:52 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:15.853 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.853 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:15.853 altname enp24s0f0np0 00:09:15.853 altname ens785f0np0 00:09:15.853 inet 192.168.100.8/24 scope global mlx_0_0 00:09:15.853 valid_lft forever preferred_lft forever 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:15.853 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.853 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:15.853 altname enp24s0f1np1 00:09:15.853 altname ens785f1np1 00:09:15.853 inet 192.168.100.9/24 scope global mlx_0_1 00:09:15.853 valid_lft forever preferred_lft forever 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@422 -- # return 0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:15.853 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 
-- # RDMA_IP_LIST='192.168.100.8 00:09:15.854 192.168.100.9' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:15.854 192.168.100.9' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:15.854 192.168.100.9' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3521794 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3521794 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3521794 ']' 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:15.854 12:52:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:15.854 [2024-05-15 12:52:52.952843] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:09:15.854 [2024-05-15 12:52:52.952903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.854 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.854 [2024-05-15 12:52:53.026645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.854 [2024-05-15 12:52:53.117692] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
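
nvmfappstart, traced again here for the multitarget test, reduces to launching nvmf_tgt and polling its RPC socket until it answers. A minimal sketch, assuming the readiness probe is a plain rpc_get_methods call as in SPDK's autotest_common.sh waitforlisten:

app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$app -i 0 -e 0xFFFF -m 0xF &    # -m 0xF: reactors on 4 cores; -e 0xFFFF: all tracepoint groups
nvmfpid=$!
# keep polling until the app listens on the UNIX-domain RPC socket
until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
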
00:09:15.854 [2024-05-15 12:52:53.117735] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.854 [2024-05-15 12:52:53.117745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.854 [2024-05-15 12:52:53.117754] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.854 [2024-05-15 12:52:53.117760] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.854 [2024-05-15 12:52:53.117852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.854 [2024-05-15 12:52:53.117951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.854 [2024-05-15 12:52:53.118030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.854 [2024-05-15 12:52:53.118032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:16.113 12:52:53 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:16.372 "nvmf_tgt_1" 00:09:16.372 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:16.372 "nvmf_tgt_2" 00:09:16.372 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:16.372 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:16.372 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:16.372 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:16.630 true 00:09:16.630 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:16.630 true 00:09:16.630 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:16.630 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:16.889 rmmod nvme_rdma 00:09:16.889 rmmod nvme_fabrics 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3521794 ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3521794 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3521794 ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3521794 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3521794 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3521794' 00:09:16.889 killing process with pid 3521794 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3521794 00:09:16.889 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3521794 00:09:17.148 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.148 12:52:54 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:17.148 00:09:17.148 real 0m8.055s 00:09:17.148 user 0m9.473s 00:09:17.148 sys 0m5.026s 00:09:17.148 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.148 12:52:54 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:17.148 ************************************ 00:09:17.148 END TEST nvmf_multitarget 00:09:17.148 ************************************ 00:09:17.148 12:52:54 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh 
--transport=rdma 00:09:17.148 12:52:54 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:17.148 12:52:54 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.148 12:52:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:17.148 ************************************ 00:09:17.148 START TEST nvmf_rpc 00:09:17.148 ************************************ 00:09:17.148 12:52:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:17.406 * Looking for test storage... 00:09:17.406 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.406 
12:52:55 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.406 12:52:55 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.407 12:52:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.681 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- 
# for pci in "${pci_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:22.682 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:22.682 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:22.682 Found net devices under 0000:18:00.0: mlx_0_0 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:22.682 Found net devices under 0000:18:00.1: mlx_0_1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:22.682 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.682 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:22.682 altname enp24s0f0np0 00:09:22.682 altname ens785f0np0 00:09:22.682 inet 192.168.100.8/24 scope global mlx_0_0 00:09:22.682 valid_lft forever preferred_lft forever 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:22.682 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:22.683 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.683 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:22.683 altname enp24s0f1np1 00:09:22.683 altname ens785f1np1 00:09:22.683 inet 192.168.100.9/24 scope global mlx_0_1 00:09:22.683 valid_lft forever preferred_lft forever 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:22.683 192.168.100.9' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:22.683 192.168.100.9' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:22.683 192.168.100.9' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3524868 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3524868 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3524868 ']' 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:22.683 12:53:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.943 [2024-05-15 12:53:00.562270] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:09:22.943 [2024-05-15 12:53:00.562334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.943 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.943 [2024-05-15 12:53:00.635527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.943 [2024-05-15 12:53:00.725131] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.943 [2024-05-15 12:53:00.725173] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.943 [2024-05-15 12:53:00.725183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.943 [2024-05-15 12:53:00.725192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.943 [2024-05-15 12:53:00.725199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:22.943 [2024-05-15 12:53:00.725250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.943 [2024-05-15 12:53:00.725339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.943 [2024-05-15 12:53:00.725416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.943 [2024-05-15 12:53:00.725418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.511 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:23.511 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:23.511 12:53:01 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.511 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.511 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.770 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:23.770 "tick_rate": 2300000000, 00:09:23.770 "poll_groups": [ 00:09:23.770 { 00:09:23.770 "name": "nvmf_tgt_poll_group_000", 00:09:23.770 "admin_qpairs": 0, 00:09:23.770 "io_qpairs": 0, 00:09:23.770 "current_admin_qpairs": 0, 00:09:23.770 "current_io_qpairs": 0, 00:09:23.770 "pending_bdev_io": 0, 00:09:23.770 "completed_nvme_io": 0, 00:09:23.770 "transports": [] 00:09:23.770 }, 00:09:23.770 { 00:09:23.770 "name": "nvmf_tgt_poll_group_001", 00:09:23.770 "admin_qpairs": 0, 00:09:23.770 "io_qpairs": 0, 00:09:23.770 "current_admin_qpairs": 0, 00:09:23.770 "current_io_qpairs": 0, 00:09:23.770 "pending_bdev_io": 0, 00:09:23.770 "completed_nvme_io": 0, 00:09:23.770 "transports": [] 00:09:23.770 }, 00:09:23.770 { 00:09:23.770 "name": "nvmf_tgt_poll_group_002", 00:09:23.770 "admin_qpairs": 0, 00:09:23.770 "io_qpairs": 0, 00:09:23.771 "current_admin_qpairs": 0, 00:09:23.771 "current_io_qpairs": 0, 00:09:23.771 "pending_bdev_io": 0, 00:09:23.771 "completed_nvme_io": 0, 00:09:23.771 "transports": [] 00:09:23.771 }, 00:09:23.771 { 00:09:23.771 "name": "nvmf_tgt_poll_group_003", 00:09:23.771 "admin_qpairs": 0, 00:09:23.771 "io_qpairs": 0, 00:09:23.771 "current_admin_qpairs": 0, 00:09:23.771 "current_io_qpairs": 0, 00:09:23.771 "pending_bdev_io": 0, 00:09:23.771 "completed_nvme_io": 0, 00:09:23.771 "transports": [] 00:09:23.771 } 00:09:23.771 ] 00:09:23.771 }' 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.771 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.771 [2024-05-15 12:53:01.570190] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb8a100/0xb8e5f0) succeed. 00:09:23.771 [2024-05-15 12:53:01.580754] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb8b740/0xbcfc80) succeed. 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.030 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:24.030 "tick_rate": 2300000000, 00:09:24.030 "poll_groups": [ 00:09:24.030 { 00:09:24.030 "name": "nvmf_tgt_poll_group_000", 00:09:24.030 "admin_qpairs": 0, 00:09:24.030 "io_qpairs": 0, 00:09:24.030 "current_admin_qpairs": 0, 00:09:24.030 "current_io_qpairs": 0, 00:09:24.030 "pending_bdev_io": 0, 00:09:24.030 "completed_nvme_io": 0, 00:09:24.030 "transports": [ 00:09:24.030 { 00:09:24.030 "trtype": "RDMA", 00:09:24.030 "pending_data_buffer": 0, 00:09:24.030 "devices": [ 00:09:24.030 { 00:09:24.030 "name": "mlx5_0", 00:09:24.030 "polls": 15956, 00:09:24.030 "idle_polls": 15956, 00:09:24.030 "completions": 0, 00:09:24.030 "requests": 0, 00:09:24.030 "request_latency": 0, 00:09:24.030 "pending_free_request": 0, 00:09:24.030 "pending_rdma_read": 0, 00:09:24.030 "pending_rdma_write": 0, 00:09:24.030 "pending_rdma_send": 0, 00:09:24.030 "total_send_wrs": 0, 00:09:24.030 "send_doorbell_updates": 0, 00:09:24.030 "total_recv_wrs": 4096, 00:09:24.030 "recv_doorbell_updates": 1 00:09:24.030 }, 00:09:24.030 { 00:09:24.030 "name": "mlx5_1", 00:09:24.030 "polls": 15956, 00:09:24.030 "idle_polls": 15956, 00:09:24.030 "completions": 0, 00:09:24.030 "requests": 0, 00:09:24.030 "request_latency": 0, 00:09:24.030 "pending_free_request": 0, 00:09:24.030 "pending_rdma_read": 0, 00:09:24.030 "pending_rdma_write": 0, 00:09:24.030 "pending_rdma_send": 0, 00:09:24.030 "total_send_wrs": 0, 00:09:24.030 "send_doorbell_updates": 0, 00:09:24.030 "total_recv_wrs": 4096, 00:09:24.030 "recv_doorbell_updates": 1 00:09:24.030 } 00:09:24.030 ] 00:09:24.030 } 00:09:24.030 ] 00:09:24.030 }, 00:09:24.030 { 00:09:24.030 "name": "nvmf_tgt_poll_group_001", 00:09:24.030 "admin_qpairs": 0, 00:09:24.030 "io_qpairs": 0, 00:09:24.030 "current_admin_qpairs": 0, 00:09:24.030 "current_io_qpairs": 0, 00:09:24.030 "pending_bdev_io": 0, 00:09:24.030 "completed_nvme_io": 0, 00:09:24.030 "transports": [ 00:09:24.030 { 00:09:24.030 "trtype": "RDMA", 00:09:24.030 "pending_data_buffer": 0, 00:09:24.030 "devices": [ 00:09:24.030 { 00:09:24.030 "name": "mlx5_0", 00:09:24.030 "polls": 10133, 00:09:24.030 "idle_polls": 10133, 00:09:24.030 "completions": 0, 00:09:24.030 "requests": 0, 00:09:24.030 "request_latency": 0, 00:09:24.030 "pending_free_request": 0, 00:09:24.030 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 }, 00:09:24.031 { 
00:09:24.031 "name": "mlx5_1", 00:09:24.031 "polls": 10133, 00:09:24.031 "idle_polls": 10133, 00:09:24.031 "completions": 0, 00:09:24.031 "requests": 0, 00:09:24.031 "request_latency": 0, 00:09:24.031 "pending_free_request": 0, 00:09:24.031 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 } 00:09:24.031 ] 00:09:24.031 } 00:09:24.031 ] 00:09:24.031 }, 00:09:24.031 { 00:09:24.031 "name": "nvmf_tgt_poll_group_002", 00:09:24.031 "admin_qpairs": 0, 00:09:24.031 "io_qpairs": 0, 00:09:24.031 "current_admin_qpairs": 0, 00:09:24.031 "current_io_qpairs": 0, 00:09:24.031 "pending_bdev_io": 0, 00:09:24.031 "completed_nvme_io": 0, 00:09:24.031 "transports": [ 00:09:24.031 { 00:09:24.031 "trtype": "RDMA", 00:09:24.031 "pending_data_buffer": 0, 00:09:24.031 "devices": [ 00:09:24.031 { 00:09:24.031 "name": "mlx5_0", 00:09:24.031 "polls": 5577, 00:09:24.031 "idle_polls": 5577, 00:09:24.031 "completions": 0, 00:09:24.031 "requests": 0, 00:09:24.031 "request_latency": 0, 00:09:24.031 "pending_free_request": 0, 00:09:24.031 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 }, 00:09:24.031 { 00:09:24.031 "name": "mlx5_1", 00:09:24.031 "polls": 5577, 00:09:24.031 "idle_polls": 5577, 00:09:24.031 "completions": 0, 00:09:24.031 "requests": 0, 00:09:24.031 "request_latency": 0, 00:09:24.031 "pending_free_request": 0, 00:09:24.031 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 } 00:09:24.031 ] 00:09:24.031 } 00:09:24.031 ] 00:09:24.031 }, 00:09:24.031 { 00:09:24.031 "name": "nvmf_tgt_poll_group_003", 00:09:24.031 "admin_qpairs": 0, 00:09:24.031 "io_qpairs": 0, 00:09:24.031 "current_admin_qpairs": 0, 00:09:24.031 "current_io_qpairs": 0, 00:09:24.031 "pending_bdev_io": 0, 00:09:24.031 "completed_nvme_io": 0, 00:09:24.031 "transports": [ 00:09:24.031 { 00:09:24.031 "trtype": "RDMA", 00:09:24.031 "pending_data_buffer": 0, 00:09:24.031 "devices": [ 00:09:24.031 { 00:09:24.031 "name": "mlx5_0", 00:09:24.031 "polls": 871, 00:09:24.031 "idle_polls": 871, 00:09:24.031 "completions": 0, 00:09:24.031 "requests": 0, 00:09:24.031 "request_latency": 0, 00:09:24.031 "pending_free_request": 0, 00:09:24.031 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 }, 00:09:24.031 { 00:09:24.031 "name": "mlx5_1", 00:09:24.031 "polls": 871, 00:09:24.031 "idle_polls": 871, 00:09:24.031 "completions": 0, 00:09:24.031 "requests": 0, 00:09:24.031 "request_latency": 0, 00:09:24.031 "pending_free_request": 0, 00:09:24.031 "pending_rdma_read": 0, 00:09:24.031 "pending_rdma_write": 0, 00:09:24.031 "pending_rdma_send": 0, 00:09:24.031 "total_send_wrs": 0, 00:09:24.031 "send_doorbell_updates": 0, 00:09:24.031 "total_recv_wrs": 4096, 00:09:24.031 "recv_doorbell_updates": 1 00:09:24.031 } 00:09:24.031 ] 
00:09:24.031 } 00:09:24.031 ] 00:09:24.031 } 00:09:24.031 ] 00:09:24.031 }' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:24.031 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.291 12:53:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.291 Malloc1 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.292 12:53:02 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 [2024-05-15 12:53:02.041141] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:24.292 [2024-05-15 12:53:02.041556] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:24.292 [2024-05-15 12:53:02.087428] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:09:24.292 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:24.292 could not add new controller: failed to write to nvme-fabrics device 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.292 12:53:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.229 12:53:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.229 12:53:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:25.229 12:53:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.229 12:53:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:25.229 12:53:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:27.231 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:27.491 12:53:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.427 12:53:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.427 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:28.428 [2024-05-15 12:53:06.179123] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:09:28.428 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:28.428 could not add new controller: failed to write to nvme-fabrics device 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.428 12:53:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:29.365 12:53:07 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.365 12:53:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:29.365 12:53:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.365 12:53:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:29.365 12:53:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:31.900 12:53:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 [2024-05-15 12:53:10.261135] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.468 12:53:10 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.468 12:53:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:33.405 12:53:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.405 12:53:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:33.405 12:53:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.405 12:53:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:33.405 12:53:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:35.940 12:53:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 [2024-05-15 12:53:14.273237] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.507 12:53:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:37.442 12:53:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.442 12:53:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:37.442 12:53:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.442 12:53:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:37.442 12:53:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # return 0 00:09:39.978 12:53:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.544 [2024-05-15 12:53:18.307190] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.544 12:53:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:41.662 12:53:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.662 12:53:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:41.662 12:53:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.662 12:53:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:41.662 12:53:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:43.575 12:53:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 [2024-05-15 12:53:22.348318] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.511 12:53:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:45.887 12:53:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.887 12:53:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:45.887 12:53:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.887 12:53:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:45.887 12:53:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:47.789 12:53:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.726 [2024-05-15 12:53:26.373514] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.726 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.727 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.727 12:53:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.727 12:53:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:49.665 12:53:27 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.665 12:53:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:49.665 12:53:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.665 
12:53:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:49.665 12:53:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:51.572 12:53:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.510 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.770 [2024-05-15 12:53:30.411627] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 
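[Note] The loop traced above (the target/rpc.sh @81..@94 block) exercises the full RPC round trip several times: create the subsystem, add an RDMA listener and a namespace, open it to any host, attach from the host side with nvme-cli, wait for the serial to show up in lsblk, then disconnect and tear everything down. The @99..@107 loop that starts next repeats the same RPC sequence without the host connect, which is why its five passes complete within the same second. A minimal sketch of one connect-loop pass, assuming a running nvmf_tgt; paths, NQNs, and addresses are the ones appearing in this run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5        # namespace id 5, as in the trace
$rpc nvmf_subsystem_allow_any_host $nqn
nvme connect -i 15 -t rdma -n $nqn -a 192.168.100.8 -s 4420   # --hostnqn/--hostid flags from the run omitted
# waitforserial: poll until the serial is visible as a block device
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
nvme disconnect -n $nqn
$rpc nvmf_subsystem_remove_ns $nqn 5
$rpc nvmf_delete_subsystem $nqn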
00:09:52.770 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 [2024-05-15 12:53:30.459931] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 [2024-05-15 12:53:30.512159] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 [2024-05-15 12:53:30.560301] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 [2024-05-15 
12:53:30.608497] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.771 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:53.031 "tick_rate": 2300000000, 00:09:53.031 "poll_groups": [ 00:09:53.031 { 00:09:53.031 "name": "nvmf_tgt_poll_group_000", 00:09:53.031 "admin_qpairs": 2, 00:09:53.031 "io_qpairs": 27, 00:09:53.031 "current_admin_qpairs": 0, 00:09:53.031 "current_io_qpairs": 0, 00:09:53.031 "pending_bdev_io": 0, 00:09:53.031 "completed_nvme_io": 128, 00:09:53.031 "transports": [ 00:09:53.031 { 00:09:53.031 "trtype": "RDMA", 00:09:53.031 "pending_data_buffer": 0, 00:09:53.031 "devices": [ 00:09:53.031 { 00:09:53.031 "name": "mlx5_0", 00:09:53.031 "polls": 3430778, 00:09:53.031 "idle_polls": 3430450, 00:09:53.031 "completions": 369, 00:09:53.031 "requests": 184, 00:09:53.031 "request_latency": 33208808, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 311, 00:09:53.031 "send_doorbell_updates": 165, 00:09:53.031 "total_recv_wrs": 4280, 00:09:53.031 "recv_doorbell_updates": 165 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "mlx5_1", 00:09:53.031 "polls": 3430778, 00:09:53.031 "idle_polls": 3430778, 00:09:53.031 "completions": 0, 00:09:53.031 "requests": 0, 00:09:53.031 "request_latency": 0, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 
"pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 0, 00:09:53.031 "send_doorbell_updates": 0, 00:09:53.031 "total_recv_wrs": 4096, 00:09:53.031 "recv_doorbell_updates": 1 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "nvmf_tgt_poll_group_001", 00:09:53.031 "admin_qpairs": 2, 00:09:53.031 "io_qpairs": 26, 00:09:53.031 "current_admin_qpairs": 0, 00:09:53.031 "current_io_qpairs": 0, 00:09:53.031 "pending_bdev_io": 0, 00:09:53.031 "completed_nvme_io": 124, 00:09:53.031 "transports": [ 00:09:53.031 { 00:09:53.031 "trtype": "RDMA", 00:09:53.031 "pending_data_buffer": 0, 00:09:53.031 "devices": [ 00:09:53.031 { 00:09:53.031 "name": "mlx5_0", 00:09:53.031 "polls": 3436816, 00:09:53.031 "idle_polls": 3436499, 00:09:53.031 "completions": 356, 00:09:53.031 "requests": 178, 00:09:53.031 "request_latency": 33768490, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 301, 00:09:53.031 "send_doorbell_updates": 156, 00:09:53.031 "total_recv_wrs": 4274, 00:09:53.031 "recv_doorbell_updates": 157 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "mlx5_1", 00:09:53.031 "polls": 3436816, 00:09:53.031 "idle_polls": 3436816, 00:09:53.031 "completions": 0, 00:09:53.031 "requests": 0, 00:09:53.031 "request_latency": 0, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 0, 00:09:53.031 "send_doorbell_updates": 0, 00:09:53.031 "total_recv_wrs": 4096, 00:09:53.031 "recv_doorbell_updates": 1 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "nvmf_tgt_poll_group_002", 00:09:53.031 "admin_qpairs": 1, 00:09:53.031 "io_qpairs": 26, 00:09:53.031 "current_admin_qpairs": 0, 00:09:53.031 "current_io_qpairs": 0, 00:09:53.031 "pending_bdev_io": 0, 00:09:53.031 "completed_nvme_io": 78, 00:09:53.031 "transports": [ 00:09:53.031 { 00:09:53.031 "trtype": "RDMA", 00:09:53.031 "pending_data_buffer": 0, 00:09:53.031 "devices": [ 00:09:53.031 { 00:09:53.031 "name": "mlx5_0", 00:09:53.031 "polls": 3447088, 00:09:53.031 "idle_polls": 3446897, 00:09:53.031 "completions": 213, 00:09:53.031 "requests": 106, 00:09:53.031 "request_latency": 20803090, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 171, 00:09:53.031 "send_doorbell_updates": 95, 00:09:53.031 "total_recv_wrs": 4202, 00:09:53.031 "recv_doorbell_updates": 95 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "mlx5_1", 00:09:53.031 "polls": 3447088, 00:09:53.031 "idle_polls": 3447088, 00:09:53.031 "completions": 0, 00:09:53.031 "requests": 0, 00:09:53.031 "request_latency": 0, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 0, 00:09:53.031 "send_doorbell_updates": 0, 00:09:53.031 "total_recv_wrs": 4096, 00:09:53.031 "recv_doorbell_updates": 1 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "nvmf_tgt_poll_group_003", 00:09:53.031 "admin_qpairs": 2, 00:09:53.031 "io_qpairs": 26, 
00:09:53.031 "current_admin_qpairs": 0, 00:09:53.031 "current_io_qpairs": 0, 00:09:53.031 "pending_bdev_io": 0, 00:09:53.031 "completed_nvme_io": 125, 00:09:53.031 "transports": [ 00:09:53.031 { 00:09:53.031 "trtype": "RDMA", 00:09:53.031 "pending_data_buffer": 0, 00:09:53.031 "devices": [ 00:09:53.031 { 00:09:53.031 "name": "mlx5_0", 00:09:53.031 "polls": 2703224, 00:09:53.031 "idle_polls": 2702906, 00:09:53.031 "completions": 362, 00:09:53.031 "requests": 181, 00:09:53.031 "request_latency": 37099770, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 306, 00:09:53.031 "send_doorbell_updates": 156, 00:09:53.031 "total_recv_wrs": 4277, 00:09:53.031 "recv_doorbell_updates": 157 00:09:53.031 }, 00:09:53.031 { 00:09:53.031 "name": "mlx5_1", 00:09:53.031 "polls": 2703224, 00:09:53.031 "idle_polls": 2703224, 00:09:53.031 "completions": 0, 00:09:53.031 "requests": 0, 00:09:53.031 "request_latency": 0, 00:09:53.031 "pending_free_request": 0, 00:09:53.031 "pending_rdma_read": 0, 00:09:53.031 "pending_rdma_write": 0, 00:09:53.031 "pending_rdma_send": 0, 00:09:53.031 "total_send_wrs": 0, 00:09:53.031 "send_doorbell_updates": 0, 00:09:53.031 "total_recv_wrs": 4096, 00:09:53.031 "recv_doorbell_updates": 1 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 } 00:09:53.031 ] 00:09:53.031 }' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.031 12:53:30 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 124880158 > 0 )) 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:53.031 rmmod nvme_rdma 00:09:53.031 rmmod nvme_fabrics 00:09:53.031 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3524868 ']' 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3524868 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3524868 ']' 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3524868 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3524868 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3524868' 00:09:53.291 killing process with pid 3524868 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3524868 00:09:53.291 [2024-05-15 12:53:30.966385] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:53.291 12:53:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3524868 00:09:53.291 [2024-05-15 12:53:31.049445] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:53.550 12:53:31 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.550 12:53:31 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:53.550 00:09:53.550 real 0m36.324s 00:09:53.550 user 2m3.452s 00:09:53.550 sys 0m5.914s 00:09:53.550 12:53:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.550 12:53:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 ************************************ 00:09:53.550 END TEST nvmf_rpc 00:09:53.550 ************************************ 00:09:53.550 12:53:31 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:53.550 12:53:31 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
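[Note] The four jsum assertions above sum a jq filter across the nvmf_get_stats output captured at @110: .poll_groups[].admin_qpairs gives 2+2+1+2 = 7, .poll_groups[].io_qpairs gives 27+26+26+26 = 105, the per-device completions total 1300, and the request_latency fields total 33208808+33768490+20803090+37099770 = 124880158, each only required to be > 0. Judging by the @19/@20 trace lines, the helper is essentially the following; how the $stats string is fed to jq is not visible in the trace, so a here-string is assumed here:

jsum() {
    local filter=$1
    # Sum every numeric value the filter emits from the captured stats JSON
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # prints 7 for the stats above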
00:09:53.550 12:53:31 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.550 12:53:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 ************************************ 00:09:53.550 START TEST nvmf_invalid 00:09:53.550 ************************************ 00:09:53.550 12:53:31 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:53.810 * Looking for test storage... 00:09:53.810 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.810 
12:53:31 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.810 12:53:31 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:00.436 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:00.436 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:00.436 Found net devices under 0000:18:00.0: mlx_0_0 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:00.436 Found net devices under 0000:18:00.1: mlx_0_1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:00.436 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:00.436 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:00.436 altname enp24s0f0np0 00:10:00.436 altname ens785f0np0 00:10:00.436 inet 192.168.100.8/24 scope global mlx_0_0 00:10:00.436 valid_lft forever preferred_lft forever 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.436 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:00.437 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:00.437 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:00.437 altname enp24s0f1np1 00:10:00.437 altname ens785f1np1 00:10:00.437 inet 192.168.100.9/24 scope global mlx_0_1 00:10:00.437 valid_lft forever preferred_lft forever 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # 
mapfile -t rxe_net_devs 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:00.437 192.168.100.9' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:00.437 192.168.100.9' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:00.437 192.168.100.9' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3531834 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3531834 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3531834 ']' 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:00.437 12:53:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.437 [2024-05-15 12:53:37.418558] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:10:00.437 [2024-05-15 12:53:37.418613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.437 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.437 [2024-05-15 12:53:37.491562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.437 [2024-05-15 12:53:37.579623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.437 [2024-05-15 12:53:37.579668] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.437 [2024-05-15 12:53:37.579677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.437 [2024-05-15 12:53:37.579701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.437 [2024-05-15 12:53:37.579708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
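[Note] The interface bring-up traced through nvmf/common.sh reduces to one pipeline per RDMA netdev: take the fourth column of ip -o -4 addr show (the CIDR address), strip the prefix length, then split the resulting list into first and second target IPs with head/tail exactly as shown at @457/@458. A standalone equivalent, assuming the mlx_0_0/mlx_0_1 names configured on this rig:

get_ip_address() {
    # "192.168.100.8/24" -> "192.168.100.8"
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9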
00:10:00.437 [2024-05-15 12:53:37.579762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.437 [2024-05-15 12:53:37.579867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.437 [2024-05-15 12:53:37.579937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.437 [2024-05-15 12:53:37.579939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:00.437 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24671 00:10:00.696 [2024-05-15 12:53:38.484728] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:00.696 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:00.696 { 00:10:00.696 "nqn": "nqn.2016-06.io.spdk:cnode24671", 00:10:00.696 "tgt_name": "foobar", 00:10:00.696 "method": "nvmf_create_subsystem", 00:10:00.696 "req_id": 1 00:10:00.696 } 00:10:00.696 Got JSON-RPC error response 00:10:00.696 response: 00:10:00.696 { 00:10:00.696 "code": -32603, 00:10:00.696 "message": "Unable to find target foobar" 00:10:00.696 }' 00:10:00.696 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:00.696 { 00:10:00.696 "nqn": "nqn.2016-06.io.spdk:cnode24671", 00:10:00.696 "tgt_name": "foobar", 00:10:00.696 "method": "nvmf_create_subsystem", 00:10:00.696 "req_id": 1 00:10:00.696 } 00:10:00.696 Got JSON-RPC error response 00:10:00.696 response: 00:10:00.696 { 00:10:00.696 "code": -32603, 00:10:00.696 "message": "Unable to find target foobar" 00:10:00.696 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:00.696 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:00.696 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28704 00:10:00.954 [2024-05-15 12:53:38.681446] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28704: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:00.954 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:00.954 { 00:10:00.954 "nqn": "nqn.2016-06.io.spdk:cnode28704", 00:10:00.954 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:00.954 "method": "nvmf_create_subsystem", 00:10:00.954 "req_id": 1 00:10:00.954 } 00:10:00.954 Got JSON-RPC error response 00:10:00.954 response: 00:10:00.954 { 00:10:00.954 "code": -32602, 00:10:00.954 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:00.954 }' 00:10:00.954 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:10:00.954 { 00:10:00.954 "nqn": "nqn.2016-06.io.spdk:cnode28704", 00:10:00.954 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:00.954 "method": "nvmf_create_subsystem", 00:10:00.954 "req_id": 1 00:10:00.954 } 00:10:00.954 Got JSON-RPC error response 00:10:00.954 response: 00:10:00.954 { 00:10:00.954 "code": -32602, 00:10:00.954 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:00.954 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:00.954 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:00.954 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28897 00:10:01.213 [2024-05-15 12:53:38.861977] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28897: invalid model number 'SPDK_Controller' 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:01.213 { 00:10:01.213 "nqn": "nqn.2016-06.io.spdk:cnode28897", 00:10:01.213 "model_number": "SPDK_Controller\u001f", 00:10:01.213 "method": "nvmf_create_subsystem", 00:10:01.213 "req_id": 1 00:10:01.213 } 00:10:01.213 Got JSON-RPC error response 00:10:01.213 response: 00:10:01.213 { 00:10:01.213 "code": -32602, 00:10:01.213 "message": "Invalid MN SPDK_Controller\u001f" 00:10:01.213 }' 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:01.213 { 00:10:01.213 "nqn": "nqn.2016-06.io.spdk:cnode28897", 00:10:01.213 "model_number": "SPDK_Controller\u001f", 00:10:01.213 "method": "nvmf_create_subsystem", 00:10:01.213 "req_id": 1 00:10:01.213 } 00:10:01.213 Got JSON-RPC error response 00:10:01.213 response: 00:10:01.213 { 00:10:01.213 "code": -32602, 00:10:01.213 "message": "Invalid MN SPDK_Controller\u001f" 00:10:01.213 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 106 00:10:01.213 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:01.214 12:53:38 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x6b' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OjCmj7w4[yx+$T_U%k4d$' 00:10:01.214 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OjCmj7w4[yx+$T_U%k4d$' nqn.2016-06.io.spdk:cnode9985 00:10:01.473 [2024-05-15 12:53:39.223216] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9985: invalid serial number 'OjCmj7w4[yx+$T_U%k4d$' 00:10:01.473 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:01.473 { 00:10:01.473 "nqn": "nqn.2016-06.io.spdk:cnode9985", 00:10:01.473 "serial_number": "OjCmj7w4[yx+$T_U%k4d$", 00:10:01.473 "method": "nvmf_create_subsystem", 00:10:01.473 "req_id": 1 00:10:01.473 } 00:10:01.473 Got JSON-RPC error response 00:10:01.473 response: 00:10:01.473 { 00:10:01.473 "code": -32602, 00:10:01.473 "message": "Invalid SN OjCmj7w4[yx+$T_U%k4d$" 00:10:01.473 }' 00:10:01.473 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:01.474 { 00:10:01.474 "nqn": "nqn.2016-06.io.spdk:cnode9985", 00:10:01.474 "serial_number": "OjCmj7w4[yx+$T_U%k4d$", 00:10:01.474 "method": "nvmf_create_subsystem", 00:10:01.474 "req_id": 1 00:10:01.474 } 00:10:01.474 Got JSON-RPC error response 00:10:01.474 response: 00:10:01.474 { 00:10:01.474 "code": -32602, 00:10:01.474 "message": "Invalid SN OjCmj7w4[yx+$T_U%k4d$" 00:10:01.474 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' 
'57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:01.474 
12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:01.474 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:01.475 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.475 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.475 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=4 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:01.734 
12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:01.734 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ny6"S,'\''ox}=M=Q4eHo#&XG~VC(:qw#x"[?[u>2#W4' 00:10:01.735 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ny6"S,'\''ox}=M=Q4eHo#&XG~VC(:qw#x"[?[u>2#W4' nqn.2016-06.io.spdk:cnode12872 00:10:01.994 [2024-05-15 12:53:39.740962] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12872: invalid model number 'ny6"S,'ox}=M=Q4eHo#&XG~VC(:qw#x"[?[u>2#W4' 00:10:01.994 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:01.994 { 00:10:01.994 "nqn": "nqn.2016-06.io.spdk:cnode12872", 00:10:01.994 "model_number": "ny6\"S,'\''ox}=M=Q4eHo#&XG~VC(:qw#x\"[?[u>2#W4", 00:10:01.994 "method": "nvmf_create_subsystem", 00:10:01.994 "req_id": 1 00:10:01.994 } 00:10:01.994 Got JSON-RPC error response 00:10:01.994 response: 00:10:01.994 { 00:10:01.994 "code": -32602, 00:10:01.994 "message": "Invalid MN ny6\"S,'\''ox}=M=Q4eHo#&XG~VC(:qw#x\"[?[u>2#W4" 00:10:01.994 }' 00:10:01.994 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:01.994 { 00:10:01.994 "nqn": "nqn.2016-06.io.spdk:cnode12872", 00:10:01.994 "model_number": "ny6\"S,'ox}=M=Q4eHo#&XG~VC(:qw#x\"[?[u>2#W4", 00:10:01.994 "method": "nvmf_create_subsystem", 00:10:01.994 "req_id": 1 00:10:01.994 } 00:10:01.994 Got JSON-RPC error response 00:10:01.994 response: 00:10:01.994 { 00:10:01.994 "code": -32602, 00:10:01.994 "message": "Invalid MN ny6\"S,'ox}=M=Q4eHo#&XG~VC(:qw#x\"[?[u>2#W4" 00:10:01.994 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:01.994 12:53:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:10:02.252 [2024-05-15 12:53:39.952123] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2142990/0x2146e80) succeed. 00:10:02.252 [2024-05-15 12:53:39.962683] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2143fd0/0x2188510) succeed. 
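Two notes on the machinery exercised above. First, gen_random_s: the per-character trace (printf %x, echo -e '\xNN', string+=...) corresponds to a generator along these lines. The chars array and the append behaviour are visible in the log; the random index selection is not, and is therefore an assumption in this sketch:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})   # decimal code points, matching the array in the trace
        for (( ll = 0; ll < length; ll++ )); do
            # Render one code point as a character, as the logged printf/echo pair does.
            string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    # Caveat: command substitution strips trailing whitespace, so characters such as
    # 0x20 need the explicit quoting the trace shows (string+='[' , string+=\' , ...).

Second, every negative test in this log follows one pattern: issue an RPC with a bad value, capture the JSON-RPC request/response text, and glob-match the error message. Bash xtrace escapes each pattern character, which is why the matches appear as *\I\n\v\a\l\i\d\ \S\N* above. A condensed sketch, using the random-serial case just traced (the 2>&1 capture and || true are assumptions; the rpc.py path is taken from the log):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    serial=$(gen_random_s 21)
    out=$($rpc_py nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode9985 2>&1) || true
    [[ $out == *'Invalid SN'* ]]   # test fails if the error text changes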
00:10:02.252 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:02.512 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:10:02.512 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:10:02.512 192.168.100.9' 00:10:02.512 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:02.512 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:10:02.512 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:10:02.770 [2024-05-15 12:53:40.469051] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:02.770 [2024-05-15 12:53:40.469131] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:02.770 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:02.770 { 00:10:02.770 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:02.770 "listen_address": { 00:10:02.770 "trtype": "rdma", 00:10:02.770 "traddr": "192.168.100.8", 00:10:02.770 "trsvcid": "4421" 00:10:02.770 }, 00:10:02.770 "method": "nvmf_subsystem_remove_listener", 00:10:02.770 "req_id": 1 00:10:02.770 } 00:10:02.770 Got JSON-RPC error response 00:10:02.770 response: 00:10:02.770 { 00:10:02.770 "code": -32602, 00:10:02.770 "message": "Invalid parameters" 00:10:02.770 }' 00:10:02.770 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:02.770 { 00:10:02.770 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:02.770 "listen_address": { 00:10:02.770 "trtype": "rdma", 00:10:02.770 "traddr": "192.168.100.8", 00:10:02.770 "trsvcid": "4421" 00:10:02.770 }, 00:10:02.770 "method": "nvmf_subsystem_remove_listener", 00:10:02.770 "req_id": 1 00:10:02.770 } 00:10:02.770 Got JSON-RPC error response 00:10:02.770 response: 00:10:02.770 { 00:10:02.770 "code": -32602, 00:10:02.770 "message": "Invalid parameters" 00:10:02.770 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:02.770 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27226 -i 0 00:10:03.029 [2024-05-15 12:53:40.653766] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27226: invalid cntlid range [0-65519] 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:03.029 { 00:10:03.029 "nqn": "nqn.2016-06.io.spdk:cnode27226", 00:10:03.029 "min_cntlid": 0, 00:10:03.029 "method": "nvmf_create_subsystem", 00:10:03.029 "req_id": 1 00:10:03.029 } 00:10:03.029 Got JSON-RPC error response 00:10:03.029 response: 00:10:03.029 { 00:10:03.029 "code": -32602, 00:10:03.029 "message": "Invalid cntlid range [0-65519]" 00:10:03.029 }' 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:03.029 { 00:10:03.029 "nqn": "nqn.2016-06.io.spdk:cnode27226", 00:10:03.029 "min_cntlid": 0, 00:10:03.029 "method": "nvmf_create_subsystem", 00:10:03.029 "req_id": 1 00:10:03.029 } 00:10:03.029 Got JSON-RPC error response 00:10:03.029 response: 00:10:03.029 { 00:10:03.029 "code": -32602, 
00:10:03.029 "message": "Invalid cntlid range [0-65519]" 00:10:03.029 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6019 -i 65520 00:10:03.029 [2024-05-15 12:53:40.846457] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6019: invalid cntlid range [65520-65519] 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:03.029 { 00:10:03.029 "nqn": "nqn.2016-06.io.spdk:cnode6019", 00:10:03.029 "min_cntlid": 65520, 00:10:03.029 "method": "nvmf_create_subsystem", 00:10:03.029 "req_id": 1 00:10:03.029 } 00:10:03.029 Got JSON-RPC error response 00:10:03.029 response: 00:10:03.029 { 00:10:03.029 "code": -32602, 00:10:03.029 "message": "Invalid cntlid range [65520-65519]" 00:10:03.029 }' 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:03.029 { 00:10:03.029 "nqn": "nqn.2016-06.io.spdk:cnode6019", 00:10:03.029 "min_cntlid": 65520, 00:10:03.029 "method": "nvmf_create_subsystem", 00:10:03.029 "req_id": 1 00:10:03.029 } 00:10:03.029 Got JSON-RPC error response 00:10:03.029 response: 00:10:03.029 { 00:10:03.029 "code": -32602, 00:10:03.029 "message": "Invalid cntlid range [65520-65519]" 00:10:03.029 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:03.029 12:53:40 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27577 -I 0 00:10:03.287 [2024-05-15 12:53:41.051181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27577: invalid cntlid range [1-0] 00:10:03.287 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:03.287 { 00:10:03.287 "nqn": "nqn.2016-06.io.spdk:cnode27577", 00:10:03.287 "max_cntlid": 0, 00:10:03.287 "method": "nvmf_create_subsystem", 00:10:03.287 "req_id": 1 00:10:03.287 } 00:10:03.287 Got JSON-RPC error response 00:10:03.287 response: 00:10:03.287 { 00:10:03.287 "code": -32602, 00:10:03.287 "message": "Invalid cntlid range [1-0]" 00:10:03.287 }' 00:10:03.287 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:03.287 { 00:10:03.287 "nqn": "nqn.2016-06.io.spdk:cnode27577", 00:10:03.287 "max_cntlid": 0, 00:10:03.287 "method": "nvmf_create_subsystem", 00:10:03.287 "req_id": 1 00:10:03.287 } 00:10:03.287 Got JSON-RPC error response 00:10:03.287 response: 00:10:03.287 { 00:10:03.287 "code": -32602, 00:10:03.287 "message": "Invalid cntlid range [1-0]" 00:10:03.287 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:03.287 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26378 -I 65520 00:10:03.546 [2024-05-15 12:53:41.243896] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26378: invalid cntlid range [1-65520] 00:10:03.546 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:03.546 { 00:10:03.546 "nqn": "nqn.2016-06.io.spdk:cnode26378", 00:10:03.546 "max_cntlid": 65520, 00:10:03.546 "method": "nvmf_create_subsystem", 00:10:03.546 "req_id": 1 00:10:03.546 } 00:10:03.546 Got JSON-RPC error response 00:10:03.546 response: 00:10:03.546 { 00:10:03.546 "code": -32602, 00:10:03.546 "message": "Invalid 
cntlid range [1-65520]" 00:10:03.546 }' 00:10:03.546 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:03.546 { 00:10:03.546 "nqn": "nqn.2016-06.io.spdk:cnode26378", 00:10:03.546 "max_cntlid": 65520, 00:10:03.546 "method": "nvmf_create_subsystem", 00:10:03.546 "req_id": 1 00:10:03.546 } 00:10:03.546 Got JSON-RPC error response 00:10:03.546 response: 00:10:03.546 { 00:10:03.546 "code": -32602, 00:10:03.546 "message": "Invalid cntlid range [1-65520]" 00:10:03.546 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:03.546 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5821 -i 6 -I 5 00:10:03.546 [2024-05-15 12:53:41.424580] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5821: invalid cntlid range [6-5] 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:03.805 { 00:10:03.805 "nqn": "nqn.2016-06.io.spdk:cnode5821", 00:10:03.805 "min_cntlid": 6, 00:10:03.805 "max_cntlid": 5, 00:10:03.805 "method": "nvmf_create_subsystem", 00:10:03.805 "req_id": 1 00:10:03.805 } 00:10:03.805 Got JSON-RPC error response 00:10:03.805 response: 00:10:03.805 { 00:10:03.805 "code": -32602, 00:10:03.805 "message": "Invalid cntlid range [6-5]" 00:10:03.805 }' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:03.805 { 00:10:03.805 "nqn": "nqn.2016-06.io.spdk:cnode5821", 00:10:03.805 "min_cntlid": 6, 00:10:03.805 "max_cntlid": 5, 00:10:03.805 "method": "nvmf_create_subsystem", 00:10:03.805 "req_id": 1 00:10:03.805 } 00:10:03.805 Got JSON-RPC error response 00:10:03.805 response: 00:10:03.805 { 00:10:03.805 "code": -32602, 00:10:03.805 "message": "Invalid cntlid range [6-5]" 00:10:03.805 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:03.805 { 00:10:03.805 "name": "foobar", 00:10:03.805 "method": "nvmf_delete_target", 00:10:03.805 "req_id": 1 00:10:03.805 } 00:10:03.805 Got JSON-RPC error response 00:10:03.805 response: 00:10:03.805 { 00:10:03.805 "code": -32602, 00:10:03.805 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:03.805 }' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:03.805 { 00:10:03.805 "name": "foobar", 00:10:03.805 "method": "nvmf_delete_target", 00:10:03.805 "req_id": 1 00:10:03.805 } 00:10:03.805 Got JSON-RPC error response 00:10:03.805 response: 00:10:03.805 { 00:10:03.805 "code": -32602, 00:10:03.805 "message": "The specified target doesn't exist, cannot delete it." 
00:10:03.805 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:03.805 rmmod nvme_rdma 00:10:03.805 rmmod nvme_fabrics 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3531834 ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3531834 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3531834 ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3531834 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3531834 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3531834' 00:10:03.805 killing process with pid 3531834 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3531834 00:10:03.805 [2024-05-15 12:53:41.663962] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:03.805 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3531834 00:10:04.064 [2024-05-15 12:53:41.752857] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:04.323 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.323 12:53:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:04.323 00:10:04.323 real 0m10.578s 00:10:04.323 user 0m21.380s 00:10:04.323 sys 0m5.601s 00:10:04.323 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:04.323 12:53:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:04.323 ************************************ 00:10:04.323 END TEST nvmf_invalid 00:10:04.323 ************************************ 00:10:04.324 12:53:42 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test 
nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:04.324 12:53:42 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:04.324 12:53:42 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:04.324 12:53:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:04.324 ************************************ 00:10:04.324 START TEST nvmf_abort 00:10:04.324 ************************************ 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:04.324 * Looking for test storage... 00:10:04.324 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:04.324 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
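The common.sh sourcing above also fixes the host identity via nvme-cli. Condensed, with the values from this run in comments; the parameter expansion used to derive the hostid is an assumption, since the log only shows the resulting value:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # 809f3706-e051-e711-906e-0017a4403562
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")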
00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.583 12:53:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:11.151 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:11.151 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.151 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:11.152 Found net devices under 0000:18:00.0: mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
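What the harness just did is match both Mellanox functions (0000:18:00.0 and 0000:18:00.1, vendor 0x15b3, device 0x1015, which should be a ConnectX-4 Lx) against its supported-NIC table and resolve each one to a net interface through sysfs. A hedged equivalent using stock tools, with paths assumed standard:

  # list the matched functions by vendor:device, domain-qualified
  lspci -D -d 15b3:1015
  # resolve one function to its net interface, as the pci_net_devs glob does
  ls /sys/bus/pci/devices/0000:18:00.0/net/   # -> mlx_0_0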
00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:11.152 Found net devices under 0000:18:00.1: mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.152 12:53:48 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:11.152 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.152 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:11.152 altname enp24s0f0np0 00:10:11.152 altname ens785f0np0 00:10:11.152 inet 192.168.100.8/24 scope global mlx_0_0 00:10:11.152 valid_lft forever preferred_lft forever 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:11.152 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.152 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:11.152 altname enp24s0f1np1 00:10:11.152 altname ens785f1np1 00:10:11.152 inet 192.168.100.9/24 scope global mlx_0_1 00:10:11.152 valid_lft forever preferred_lft forever 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:11.152 192.168.100.9' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:11.152 192.168.100.9' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:11.152 192.168.100.9' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp 
']' 00:10:11.152 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3535456 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3535456 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3535456 ']' 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:11.153 12:53:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.153 [2024-05-15 12:53:48.368810] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:10:11.153 [2024-05-15 12:53:48.368864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.153 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.153 [2024-05-15 12:53:48.441745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.153 [2024-05-15 12:53:48.528258] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.153 [2024-05-15 12:53:48.528302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.153 [2024-05-15 12:53:48.528311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.153 [2024-05-15 12:53:48.528319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.153 [2024-05-15 12:53:48.528326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
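The address selection logged just above is simple once unrolled: each RDMA-capable netdev contributes its first IPv4 address, a head/tail split picks the first and second target, and nvme-rdma is modprobed so the host side of the test can reach the target. The same derivation as two direct commands, interface names as in the log:

  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9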
00:10:11.153 [2024-05-15 12:53:48.528435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.153 [2024-05-15 12:53:48.528512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.153 [2024-05-15 12:53:48.528514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.411 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.411 [2024-05-15 12:53:49.262814] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d1d880/0x1d21d70) succeed. 00:10:11.411 [2024-05-15 12:53:49.273376] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d1ee20/0x1d63400) succeed. 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 Malloc0 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 Delay0 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 [2024-05-15 12:53:49.439196] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:11.669 [2024-05-15 12:53:49.439552] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.669 12:53:49 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:11.669 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.669 [2024-05-15 12:53:49.532118] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:14.203 Initializing NVMe Controllers 00:10:14.203 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:14.203 controller IO queue size 128 less than required 00:10:14.203 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:14.203 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:14.203 Initialization complete. Launching workers. 
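The rpc_cmd sequence above is the whole abort target: an RDMA transport, a 64 MiB Malloc bdev wrapped in a delay bdev whose artificial latencies (1000000 on every I/O path, presumably microseconds) keep commands in flight long enough for aborts to catch them, and subsystem cnode0 listening on 192.168.100.8:4420. Written out as plain rpc.py calls with the flags exactly as logged:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420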
00:10:14.203 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 50594 00:10:14.203 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 50655, failed to submit 62 00:10:14.203 success 50595, unsuccess 60, failed 0 00:10:14.203 12:53:51 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:14.204 rmmod nvme_rdma 00:10:14.204 rmmod nvme_fabrics 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3535456 ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3535456 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3535456 ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3535456 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3535456 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3535456' 00:10:14.204 killing process with pid 3535456 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3535456 00:10:14.204 [2024-05-15 12:53:51.752052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:14.204 12:53:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3535456 00:10:14.204 [2024-05-15 12:53:51.824695] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:14.204 12:53:52 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.204 12:53:52 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # 
[[ rdma == \t\c\p ]] 00:10:14.204 00:10:14.204 real 0m10.000s 00:10:14.204 user 0m14.435s 00:10:14.204 sys 0m5.131s 00:10:14.204 12:53:52 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.204 12:53:52 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:14.204 ************************************ 00:10:14.204 END TEST nvmf_abort 00:10:14.204 ************************************ 00:10:14.466 12:53:52 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:14.466 12:53:52 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:14.466 12:53:52 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.466 12:53:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:14.466 ************************************ 00:10:14.466 START TEST nvmf_ns_hotplug_stress 00:10:14.466 ************************************ 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:14.466 * Looking for test storage... 00:10:14.466 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.466 12:53:52 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:21.038 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:21.038 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:21.038 12:53:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:21.038 Found net devices under 0000:18:00.0: mlx_0_0 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:21.038 Found net devices under 0000:18:00.1: mlx_0_1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:21.038 12:53:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:21.038 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:21.038 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:21.038 altname enp24s0f0np0 00:10:21.038 altname ens785f0np0 00:10:21.038 inet 192.168.100.8/24 scope 
global mlx_0_0 00:10:21.038 valid_lft forever preferred_lft forever 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:21.038 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:21.039 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:21.039 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:21.039 altname enp24s0f1np1 00:10:21.039 altname ens785f1np1 00:10:21.039 inet 192.168.100.9/24 scope global mlx_0_1 00:10:21.039 valid_lft forever preferred_lft forever 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.039 12:53:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:21.039 192.168.100.9' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:21.039 192.168.100.9' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:21.039 192.168.100.9' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3538744 00:10:21.039 12:53:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3538744 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3538744 ']' 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:21.039 12:53:58 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.039 [2024-05-15 12:53:58.288216] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:10:21.039 [2024-05-15 12:53:58.288292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.039 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.039 [2024-05-15 12:53:58.361103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.039 [2024-05-15 12:53:58.447891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.039 [2024-05-15 12:53:58.447938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.039 [2024-05-15 12:53:58.447947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.039 [2024-05-15 12:53:58.447956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.039 [2024-05-15 12:53:58.447963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
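For orientation: the waitforlisten step above simply polls the target's UNIX-domain RPC socket until the freshly launched nvmf_tgt answers. A minimal sketch of that polling pattern, assuming rpc.py's standard -s (socket) and -t (timeout) options and the generic rpc_get_methods call; the real helper in SPDK's autotest_common.sh also tracks the PID and retry budget, so treat this as an approximation:

    # Poll /var/tmp/spdk.sock until the freshly started nvmf_tgt serves RPCs.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for ((retry = 0; retry < 100; retry++)); do
        if "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
            break                     # target is up and listening
        fi
        sleep 0.5
    done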
00:10:21.039 [2024-05-15 12:53:58.448073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.039 [2024-05-15 12:53:58.448141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.039 [2024-05-15 12:53:58.448143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:21.298 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:21.558 [2024-05-15 12:53:59.337952] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1953880/0x1957d70) succeed. 00:10:21.558 [2024-05-15 12:53:59.348191] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1954e20/0x1999400) succeed. 00:10:21.817 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.817 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:22.076 [2024-05-15 12:53:59.819929] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:22.076 [2024-05-15 12:53:59.820277] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:22.076 12:53:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:22.334 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:22.334 Malloc0 00:10:22.593 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.593 Delay0 00:10:22.593 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.851 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 
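The long run of xtrace that follows is one loop unrolled many times over: ns_hotplug_stress.sh lines 44-50 keep churning namespace 1 and growing NULL1 for as long as the spdk_nvme_perf job started just below (PERF_PID) stays alive. Reconstructed from the @44-@50 records as a sketch; the error handling in the real script may differ:

    # Hotplug stress: churn namespace 1 and grow NULL1 while perf I/O runs.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID"; do      # stops once spdk_nvme_perf (-t 30) exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        "$rpc" bdev_null_resize NULL1 $((++null_size))   # 1001, 1002, ... as logged below
    done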
00:10:23.109 NULL1 00:10:23.109 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:23.368 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3539206 00:10:23.368 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:23.368 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:23.368 12:54:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.368 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.305 Read completed with error (sct=0, sc=11) 00:10:24.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.305 12:54:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.564 12:54:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:24.564 12:54:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:24.823 true 00:10:24.823 12:54:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:24.823 12:54:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 12:54:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.762 12:54:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:25.762 12:54:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1002 00:10:26.020 true 00:10:26.020 12:54:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:26.020 12:54:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.957 12:54:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.958 12:54:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:26.958 12:54:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:27.216 true 00:10:27.216 12:54:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:27.216 12:54:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 12:54:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.204 12:54:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:28.204 12:54:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:28.463 true 00:10:28.463 12:54:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:28.463 12:54:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 12:54:06 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.400 12:54:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:29.400 12:54:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:29.659 true 00:10:29.659 12:54:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:29.659 12:54:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.595 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.595 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:30.595 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:30.854 true 00:10:30.854 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:30.854 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.112 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.112 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:31.112 12:54:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:31.371 true 00:10:31.371 12:54:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:31.371 12:54:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 12:54:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.747 12:54:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:32.747 12:54:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:32.747 true 00:10:33.007 12:54:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:33.007 12:54:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.573 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 12:54:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.831 12:54:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:33.831 12:54:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:34.089 true 00:10:34.089 12:54:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:34.089 12:54:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 12:54:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.034 12:54:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:35.034 12:54:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:35.293 true 00:10:35.293 12:54:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:35.293 12:54:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 12:54:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.231 12:54:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:36.231 12:54:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:36.488 true 00:10:36.488 12:54:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:36.488 12:54:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 12:54:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.425 12:54:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:37.425 12:54:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:37.683 true 00:10:37.683 12:54:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:37.683 12:54:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.620 
12:54:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.620 12:54:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:38.620 12:54:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:38.878 true 00:10:38.878 12:54:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:38.878 12:54:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 12:54:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.814 12:54:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:39.814 12:54:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:40.073 true 00:10:40.073 12:54:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:40.073 12:54:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 12:54:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.009 12:54:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:41.009 12:54:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:41.267 true 00:10:41.267 12:54:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3539206 00:10:41.267 12:54:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 12:54:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.202 12:54:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:42.202 12:54:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:42.461 true 00:10:42.461 12:54:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:42.461 12:54:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 12:54:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.396 12:54:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:43.396 12:54:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:43.653 true 00:10:43.653 12:54:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:43.653 12:54:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.590 12:54:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.590 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.849 12:54:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:44.849 12:54:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:44.849 true 00:10:44.849 12:54:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:44.849 12:54:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.786 12:54:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.044 12:54:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:46.044 12:54:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:46.303 true 00:10:46.303 12:54:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:46.303 12:54:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 12:54:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.239 12:54:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:47.239 12:54:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 
00:10:47.498 true 00:10:47.498 12:54:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:47.498 12:54:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 12:54:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.434 12:54:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:48.434 12:54:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:48.693 true 00:10:48.693 12:54:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:48.693 12:54:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 12:54:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.628 12:54:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:49.628 12:54:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:49.628 true 00:10:49.888 12:54:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:49.888 12:54:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.714 12:54:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.714 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:10:50.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.714 12:54:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:50.714 12:54:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:50.972 true 00:10:50.972 12:54:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:50.972 12:54:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 12:54:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.260 12:54:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:52.260 12:54:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:52.260 true 00:10:52.260 12:54:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:52.260 12:54:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 12:54:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.196 12:54:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:53.196 12:54:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:53.456 true 00:10:53.456 12:54:31 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:53.456 12:54:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.392 12:54:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.392 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:54.392 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:54.650 true 00:10:54.650 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:54.650 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.908 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.908 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:54.908 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:55.167 true 00:10:55.167 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:55.167 12:54:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.426 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.426 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:55.426 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:55.685 true 00:10:55.685 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:55.685 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.942 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.200 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:56.200 12:54:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:56.200 Initializing NVMe Controllers 00:10:56.200 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.200 Controller IO queue size 128, less than required. 
00:10:56.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:56.200 Controller IO queue size 128, less than required.
00:10:56.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:56.200 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:56.200 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:56.200 Initialization complete. Launching workers.
00:10:56.200 ========================================================
00:10:56.200                                                                                  Latency(us)
00:10:56.200 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:10:56.200 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5531.77       2.70   20864.07     885.96 1138153.46
00:10:56.200 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   32669.13      15.95    3917.94    2173.42  294092.95
00:10:56.200 ========================================================
00:10:56.200 Total                                                                        :   38200.90      18.65    6371.86     885.96 1138153.46
00:10:56.200
00:10:56.200 true 00:10:56.200 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3539206 00:10:56.200 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3539206) - No such process 00:10:56.200 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3539206 00:10:56.200 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.459 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:56.717 null0 00:10:56.717 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.986 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.986 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:56.986 null1 00:10:56.986 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.249 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.249 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:57.249 null2 00:10:57.249 12:54:34
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.249 12:54:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:57.249 null3 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:57.507 null4 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.507 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:57.766 null5 00:10:57.766 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.766 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.766 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:58.026 null6 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:58.026 null7 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
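The eight backing devices for the parallel phase (null0 through null7) come from the counted loop at the @58-@60 records above: a pids array is initialized and one null bdev is created per worker. As a sketch of that fragment, assuming bdev_null_create's usual name / size-in-MB / block-size argument order:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    done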
00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
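The worker body can be read straight off the @14-@17 records interleaved above (with the matching @18 removals just below): each add_remove invocation binds one namespace ID to one null bdev and attaches and detaches it ten times. Reconstructed from the trace as a sketch:

    # One stress worker: hot-add, then hot-remove, the same namespace ten times.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }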
00:10:58.026 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3544268 3544269 3544274 3544276 3544279 3544281 3544284 3544286 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.027 12:54:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.285 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.286 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
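The single @66 wait entry above joins all eight workers in one call. Worth knowing when reading or reusing the pattern: with several PIDs on one line, bash's wait returns only the status of the last PID listed, so an earlier worker can fail without the test noticing. A stricter variant, assuming the pids array built at @64:

# Reap the workers one at a time instead of 'wait "${pids[@]}"',
# so a non-zero status from any worker is surfaced rather than masked.
rc=0
for pid in "${pids[@]}"; do
    wait "$pid" || rc=$?
done
(( rc == 0 ))   # overall result of the stress phase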
00:10:58.545 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.804 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.805 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.064 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.323 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:36 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.323 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.582 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.582 
12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.583 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.841 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.100 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.358 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.358 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.358 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.358 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.358 12:54:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.358 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.359 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.618 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
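Every entry in this stretch is one of the same two RPCs with a different (nsid, bdev) pair. Stripped of the rpc.py wrapper, one add/remove round is just two JSON-RPC requests on the target's Unix socket. A sketch of the raw exchange: /var/tmp/spdk.sock is rpc.py's usual default, and the nested "namespace" params layout is the nvmf_subsystem_add_ns schema as I recall it, so treat both as assumptions rather than something this log confirms.

# One add/remove round on the wire (nc -U where available).
# Socket path and exact params layout are assumptions, not from this log.
nc -U /var/tmp/spdk.sock <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"nsid": 4, "bdev_name": "null3"}}}
{"jsonrpc": "2.0", "id": 2, "method": "nvmf_subsystem_remove_ns",
 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 4}}
EOF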
00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.878 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.139 12:54:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.397 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.657 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.917 12:54:39 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:01.917 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:11:01.918 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:01.918 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:01.918 rmmod nvme_rdma
00:11:01.918 rmmod nvme_fabrics
00:11:01.918 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3538744 ']'
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3538744
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3538744 ']'
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3538744
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
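The @120-@125 entries above are nvmfcleanup unloading the kernel modules; the single modprobe -v -r nvme-rdma already cascades into both rmmod lines because -r also drops now-unused dependencies. The {1..20} loop guards against nvme-rdma staying busy briefly after the target exits. A sketch of that retry, with the break-on-success and the back-off marked as assumptions since the trace only shows one successful pass:

set +e                                       # @120: tolerate EBUSY while retrying
for i in {1..20}; do                         # @121
    modprobe -v -r nvme-rdma &&              # @122
        modprobe -v -r nvme-fabrics && break # @123; break is an assumption
    sleep 1                                  # assumption: back off between attempts
done
set -e                                       # @124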
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3538744
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3538744'
00:11:02.177 killing process with pid 3538744
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3538744
00:11:02.177 [2024-05-15 12:54:39.844321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:11:02.177 12:54:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3538744
00:11:02.177 [2024-05-15 12:54:39.913695] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:11:02.436 12:54:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:02.436 12:54:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:11:02.436
00:11:02.436 real 0m47.972s
00:11:02.436 user 3m19.089s
00:11:02.436 sys 0m13.873s
00:11:02.436 12:54:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:02.436 12:54:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:02.436 ************************************
00:11:02.436 END TEST nvmf_ns_hotplug_stress
00:11:02.436 ************************************
00:11:02.436 12:54:40 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:11:02.436 12:54:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:11:02.436 12:54:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:02.436 12:54:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:11:02.436 ************************************
00:11:02.436 START TEST nvmf_connect_stress
00:11:02.436 ************************************
00:11:02.436 12:54:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:11:02.436 * Looking for test storage...
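With the entries unpacked, killprocess (@946-@970 above) reads as: validate the PID, make sure it is still alive, refuse to kill a sudo wrapper, announce, signal, then reap. A compact reconstruction from those markers; anything beyond what they show is guesswork:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @946
    kill -0 "$pid" || return 1                           # @950: still alive?
    if [ "$(uname)" = Linux ]; then                      # @951
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @952 -> reactor_1 here
        [ "$process_name" = sudo ] && return 1           # @956: never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"                 # @964
    kill "$pid"                                          # @965
    wait "$pid"                                          # @970: reap and propagate status
}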
00:11:02.695 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.695 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.696 12:54:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:09.267 12:54:46 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:09.267 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:09.267 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:09.267 Found net devices under 0000:18:00.0: mlx_0_0 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.267 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:09.268 Found net devices under 0000:18:00.1: mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:09.268 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:09.268 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:09.268 altname enp24s0f0np0 00:11:09.268 altname ens785f0np0 00:11:09.268 inet 192.168.100.8/24 scope global mlx_0_0 00:11:09.268 valid_lft forever preferred_lft forever 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:09.268 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:09.268 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:09.268 altname enp24s0f1np1 00:11:09.268 altname ens785f1np1 00:11:09.268 inet 192.168.100.9/24 scope global mlx_0_1 00:11:09.268 valid_lft forever preferred_lft forever 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:09.268 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:09.269 192.168.100.9' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:09.269 192.168.100.9' 
00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:09.269 192.168.100.9' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3547869 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3547869 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3547869 ']' 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:09.269 12:54:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.269 [2024-05-15 12:54:46.666841] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:11:09.269 [2024-05-15 12:54:46.666898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.269 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.269 [2024-05-15 12:54:46.741647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.269 [2024-05-15 12:54:46.827936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
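The head/tail split traced above is how the harness turns the discovered per-port addresses into the two target IPs used for the rest of the test. A minimal sketch of that parsing, using the variable names from the nvmf/common.sh trace and the addresses discovered in this run:

    # Addresses as gathered from `ip -o -4 addr show` on the two mlx ports above
    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9\n')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # common.sh@457
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # common.sh@458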
00:11:09.269 [2024-05-15 12:54:46.827983] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.269 [2024-05-15 12:54:46.827992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.269 [2024-05-15 12:54:46.828001] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.269 [2024-05-15 12:54:46.828008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.269 [2024-05-15 12:54:46.828117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.269 [2024-05-15 12:54:46.828193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.269 [2024-05-15 12:54:46.828194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 [2024-05-15 12:54:47.561109] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f79880/0x1f7dd70) succeed. 00:11:09.837 [2024-05-15 12:54:47.571449] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f7ae20/0x1fbf400) succeed. 
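For reference, the rpc_cmd calls driving the target setup around this point map onto standalone invocations of SPDK's scripts/rpc.py against the default RPC socket; a sketch with the arguments verbatim from this run (the rpc.py path and default socket are assumptions):

    # Same transport/subsystem/listener/bdev setup as rpc_cmd performs here
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512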
00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 [2024-05-15 12:54:47.688652] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:09.837 [2024-05-15 12:54:47.689005] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.837 NULL1 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3548069 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.837 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:09.838 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.838 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:09.838 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.838 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.096 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.096 12:54:47 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:10.097 12:54:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.355 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.355 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:10.355 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.355 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.355 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.614 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:10.614 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.614 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.614 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.185 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.185 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:11.185 12:54:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.185 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.185 12:54:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.445 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.445 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:11.445 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.445 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.445 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.703 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.703 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:11.703 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.704 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.704 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.962 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.962 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:11.962 12:54:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.962 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.962 12:54:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.221 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.221 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:12.221 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.221 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
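The alternating connect_stress.sh@34/@35 records that repeat through the rest of this test are a liveness poll: kill -0 sends no signal and only checks that the stress process still exists. A sketch of the loop those two trace lines imply (names are from the trace; feeding rpc_cmd from the $rpcs file prepared at @23-@28 is an assumption):

    while kill -0 "$PERF_PID" 2>/dev/null; do  # true while connect_stress is still running
        rpc_cmd < "$rpcs"                      # assumed: replay the queued RPCs from rpc.txt
    done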
00:11:12.221 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.788 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.788 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:12.788 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.788 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.788 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.047 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.047 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:13.047 12:54:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.047 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.047 12:54:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.307 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.307 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:13.307 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.307 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.307 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.566 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.566 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:13.566 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.566 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.566 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.134 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.134 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:14.134 12:54:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.134 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.134 12:54:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.393 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.393 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:14.393 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.393 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.393 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.653 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.653 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:14.653 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.653 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.653 12:54:52 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.913 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.913 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:14.913 12:54:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.913 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.913 12:54:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.172 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.172 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:15.172 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.172 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.172 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.740 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.740 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:15.740 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.740 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.740 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.999 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.999 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:15.999 12:54:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.999 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.999 12:54:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.260 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.260 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:16.260 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.260 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.260 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.519 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.519 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:16.519 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.519 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.519 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.162 12:54:54 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.162 12:54:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.730 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.730 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:17.730 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.730 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.730 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.988 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.988 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:17.988 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.988 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.988 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.246 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.246 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:18.246 12:54:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.246 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.246 12:54:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.504 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.504 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:18.504 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.504 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.504 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.761 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.761 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:18.761 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.761 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.761 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.328 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.328 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:19.328 12:54:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.328 12:54:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.328 12:54:56 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.595 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.595 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:19.595 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.595 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.595 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.852 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.852 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:19.852 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.852 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.852 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.111 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3548069 00:11:20.111 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3548069) - No such process 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3548069 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:20.111 rmmod nvme_rdma 00:11:20.111 rmmod nvme_fabrics 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3547869 ']' 00:11:20.111 12:54:57 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3547869 00:11:20.370 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3547869 ']' 00:11:20.370 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3547869 00:11:20.370 12:54:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:11:20.370 12:54:57 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3547869 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3547869' 00:11:20.370 killing process with pid 3547869 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3547869 00:11:20.370 [2024-05-15 12:54:58.047629] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:20.370 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3547869 00:11:20.370 [2024-05-15 12:54:58.120191] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:20.628 12:54:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.628 12:54:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:20.628 00:11:20.628 real 0m18.119s 00:11:20.628 user 0m41.716s 00:11:20.628 sys 0m7.438s 00:11:20.628 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:20.628 12:54:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.628 ************************************ 00:11:20.628 END TEST nvmf_connect_stress 00:11:20.628 ************************************ 00:11:20.628 12:54:58 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:20.628 12:54:58 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:20.628 12:54:58 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:20.628 12:54:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:20.628 ************************************ 00:11:20.628 START TEST nvmf_fused_ordering 00:11:20.628 ************************************ 00:11:20.628 12:54:58 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:20.887 * Looking for test storage... 
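The nvmf_fused_ordering test starting here repeats the same nvmftestinit sequence traced above for connect_stress. The module unload that closed out the previous test (nvmf/common.sh@120-@125, the source of the rmmod nvme_rdma / rmmod nvme_fabrics lines above) follows a retry pattern, sketched here; the break-on-success is inferred from the single traced iteration:

    set +e                    # unload may fail transiently while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    done
    set -e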
00:11:20.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:20.887 12:54:58 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.888 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.888 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.888 12:54:58 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.888 12:54:58 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:27.462 12:55:04 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:27.462 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:27.462 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:27.462 Found net devices under 0000:18:00.0: mlx_0_0 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:27.462 Found net devices under 0000:18:00.1: mlx_0_1 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.462 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:27.463 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.463 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:27.463 altname enp24s0f0np0 00:11:27.463 altname ens785f0np0 00:11:27.463 inet 192.168.100.8/24 scope global mlx_0_0 00:11:27.463 valid_lft forever preferred_lft forever 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:27.463 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.463 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:27.463 altname enp24s0f1np1 00:11:27.463 altname ens785f1np1 00:11:27.463 inet 192.168.100.9/24 scope global mlx_0_1 00:11:27.463 valid_lft forever preferred_lft forever 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:27.463 192.168.100.9' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:27.463 192.168.100.9' 
00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:27.463 192.168.100.9' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3552271 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3552271 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3552271 ']' 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:27.463 12:55:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.463 [2024-05-15 12:55:04.504044] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:11:27.463 [2024-05-15 12:55:04.504103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.463 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.463 [2024-05-15 12:55:04.573975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.463 [2024-05-15 12:55:04.659277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
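The address selection traced above reduces to two small pieces of shell (the 00:11:27.463 embedded inside the quoted IP list is just the log's elapsed-time prefix on the string's second line, not part of the data). A minimal standalone sketch of both steps, with the IP list hard-coded for illustration; in the trace, nvmf/common.sh builds it by calling get_ip_address for each RDMA interface:

  # First IPv4 address of an interface, without the prefix length
  # (mirrors nvmf/common.sh@112-113 in the trace above).
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }

  # First/second target selection (mirrors nvmf/common.sh@456-458).
  RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)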
00:11:27.463 [2024-05-15 12:55:04.659311] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.464 [2024-05-15 12:55:04.659324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.464 [2024-05-15 12:55:04.659332] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.464 [2024-05-15 12:55:04.659339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.464 [2024-05-15 12:55:04.659359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.464 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:27.464 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:11:27.464 12:55:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.464 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.464 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 [2024-05-15 12:55:05.379319] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6d4300/0x6d87f0) succeed. 00:11:27.724 [2024-05-15 12:55:05.388114] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6d5800/0x719e80) succeed. 
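For reference, the NIC discovery at nvmf/common.sh@382-401 earlier in this trace amounts to listing the net devices that sysfs parents under each Mellanox PCI function. A minimal sketch, with the two PCI addresses from this run hard-coded (the real script derives them from a vendor/device-ID scan of the PCI bus):

  for pci in 0000:18:00.0 0000:18:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries under the function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done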
00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 [2024-05-15 12:55:05.444607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:27.724 [2024-05-15 12:55:05.444942] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 NULL1 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.724 12:55:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:27.724 [2024-05-15 12:55:05.501427] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
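rpc_cmd in the trace is a thin wrapper that forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock, so the target bring-up performed by fused_ordering.sh@15-20 above could be replayed by hand roughly as follows (all arguments copied from the trace; paths relative to the SPDK tree):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1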
00:11:27.724 [2024-05-15 12:55:05.501464] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552464 ] 00:11:27.725 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.985 Attached to nqn.2016-06.io.spdk:cnode1 00:11:27.985 Namespace ID: 1 size: 1GB 00:11:27.985 fused_ordering(0) 00:11:27.985 fused_ordering(1) 00:11:27.985 fused_ordering(2) 00:11:27.985 fused_ordering(3) 00:11:27.985 fused_ordering(4) 00:11:27.985 fused_ordering(5) 00:11:27.985 fused_ordering(6) 00:11:27.985 fused_ordering(7) 00:11:27.985 fused_ordering(8) 00:11:27.985 fused_ordering(9) 00:11:27.985 fused_ordering(10) 00:11:27.985 fused_ordering(11) 00:11:27.985 fused_ordering(12) 00:11:27.985 fused_ordering(13) 00:11:27.985 fused_ordering(14) 00:11:27.985 fused_ordering(15) 00:11:27.985 fused_ordering(16) 00:11:27.985 fused_ordering(17) 00:11:27.985 fused_ordering(18) 00:11:27.985 fused_ordering(19) 00:11:27.985 fused_ordering(20) 00:11:27.985 fused_ordering(21) 00:11:27.985 fused_ordering(22) 00:11:27.985 fused_ordering(23) 00:11:27.985 fused_ordering(24) 00:11:27.985 fused_ordering(25) 00:11:27.985 fused_ordering(26) 00:11:27.985 fused_ordering(27) 00:11:27.985 fused_ordering(28) 00:11:27.985 fused_ordering(29) 00:11:27.985 fused_ordering(30) 00:11:27.985 fused_ordering(31) 00:11:27.985 fused_ordering(32) 00:11:27.985 fused_ordering(33) 00:11:27.985 fused_ordering(34) 00:11:27.985 fused_ordering(35) 00:11:27.985 fused_ordering(36) 00:11:27.985 fused_ordering(37) 00:11:27.985 fused_ordering(38) 00:11:27.985 fused_ordering(39) 00:11:27.985 fused_ordering(40) 00:11:27.985 fused_ordering(41) 00:11:27.985 fused_ordering(42) 00:11:27.985 fused_ordering(43) 00:11:27.985 fused_ordering(44) 00:11:27.985 fused_ordering(45) 00:11:27.985 fused_ordering(46) 00:11:27.985 fused_ordering(47) 00:11:27.985 fused_ordering(48) 00:11:27.985 fused_ordering(49) 00:11:27.985 fused_ordering(50) 00:11:27.985 fused_ordering(51) 00:11:27.985 fused_ordering(52) 00:11:27.985 fused_ordering(53) 00:11:27.985 fused_ordering(54) 00:11:27.985 fused_ordering(55) 00:11:27.985 fused_ordering(56) 00:11:27.985 fused_ordering(57) 00:11:27.985 fused_ordering(58) 00:11:27.985 fused_ordering(59) 00:11:27.985 fused_ordering(60) 00:11:27.985 fused_ordering(61) 00:11:27.985 fused_ordering(62) 00:11:27.985 fused_ordering(63) 00:11:27.985 fused_ordering(64) 00:11:27.985 fused_ordering(65) 00:11:27.985 fused_ordering(66) 00:11:27.985 fused_ordering(67) 00:11:27.985 fused_ordering(68) 00:11:27.985 fused_ordering(69) 00:11:27.985 fused_ordering(70) 00:11:27.985 fused_ordering(71) 00:11:27.985 fused_ordering(72) 00:11:27.985 fused_ordering(73) 00:11:27.985 fused_ordering(74) 00:11:27.985 fused_ordering(75) 00:11:27.985 fused_ordering(76) 00:11:27.985 fused_ordering(77) 00:11:27.985 fused_ordering(78) 00:11:27.985 fused_ordering(79) 00:11:27.985 fused_ordering(80) 00:11:27.986 fused_ordering(81) 00:11:27.986 fused_ordering(82) 00:11:27.986 fused_ordering(83) 00:11:27.986 fused_ordering(84) 00:11:27.986 fused_ordering(85) 00:11:27.986 fused_ordering(86) 00:11:27.986 fused_ordering(87) 00:11:27.986 fused_ordering(88) 00:11:27.986 fused_ordering(89) 00:11:27.986 fused_ordering(90) 00:11:27.986 fused_ordering(91) 00:11:27.986 fused_ordering(92) 00:11:27.986 fused_ordering(93) 00:11:27.986 fused_ordering(94) 00:11:27.986 fused_ordering(95) 00:11:27.986 fused_ordering(96) 00:11:27.986 
fused_ordering(97) 00:11:27.986 fused_ordering(98) 00:11:27.986 fused_ordering(99) 00:11:27.986 fused_ordering(100) 00:11:27.986 fused_ordering(101) 00:11:27.986 fused_ordering(102) 00:11:27.986 fused_ordering(103) 00:11:27.986 fused_ordering(104) 00:11:27.986 fused_ordering(105) 00:11:27.986 fused_ordering(106) 00:11:27.986 fused_ordering(107) 00:11:27.986 fused_ordering(108) 00:11:27.986 fused_ordering(109) 00:11:27.986 fused_ordering(110) 00:11:27.986 fused_ordering(111) 00:11:27.986 fused_ordering(112) 00:11:27.986 fused_ordering(113) 00:11:27.986 fused_ordering(114) 00:11:27.986 fused_ordering(115) 00:11:27.986 fused_ordering(116) 00:11:27.986 fused_ordering(117) 00:11:27.986 fused_ordering(118) 00:11:27.986 fused_ordering(119) 00:11:27.986 fused_ordering(120) 00:11:27.986 fused_ordering(121) 00:11:27.986 fused_ordering(122) 00:11:27.986 fused_ordering(123) 00:11:27.986 fused_ordering(124) 00:11:27.986 fused_ordering(125) 00:11:27.986 fused_ordering(126) 00:11:27.986 fused_ordering(127) 00:11:27.986 fused_ordering(128) 00:11:27.986 fused_ordering(129) 00:11:27.986 fused_ordering(130) 00:11:27.986 fused_ordering(131) 00:11:27.986 fused_ordering(132) 00:11:27.986 fused_ordering(133) 00:11:27.986 fused_ordering(134) 00:11:27.986 fused_ordering(135) 00:11:27.986 fused_ordering(136) 00:11:27.986 fused_ordering(137) 00:11:27.986 fused_ordering(138) 00:11:27.986 fused_ordering(139) 00:11:27.986 fused_ordering(140) 00:11:27.986 fused_ordering(141) 00:11:27.986 fused_ordering(142) 00:11:27.986 fused_ordering(143) 00:11:27.986 fused_ordering(144) 00:11:27.986 fused_ordering(145) 00:11:27.986 fused_ordering(146) 00:11:27.986 fused_ordering(147) 00:11:27.986 fused_ordering(148) 00:11:27.986 fused_ordering(149) 00:11:27.986 fused_ordering(150) 00:11:27.986 fused_ordering(151) 00:11:27.986 fused_ordering(152) 00:11:27.986 fused_ordering(153) 00:11:27.986 fused_ordering(154) 00:11:27.986 fused_ordering(155) 00:11:27.986 fused_ordering(156) 00:11:27.986 fused_ordering(157) 00:11:27.986 fused_ordering(158) 00:11:27.986 fused_ordering(159) 00:11:27.986 fused_ordering(160) 00:11:27.986 fused_ordering(161) 00:11:27.986 fused_ordering(162) 00:11:27.986 fused_ordering(163) 00:11:27.986 fused_ordering(164) 00:11:27.986 fused_ordering(165) 00:11:27.986 fused_ordering(166) 00:11:27.986 fused_ordering(167) 00:11:27.986 fused_ordering(168) 00:11:27.986 fused_ordering(169) 00:11:27.986 fused_ordering(170) 00:11:27.986 fused_ordering(171) 00:11:27.986 fused_ordering(172) 00:11:27.986 fused_ordering(173) 00:11:27.986 fused_ordering(174) 00:11:27.986 fused_ordering(175) 00:11:27.986 fused_ordering(176) 00:11:27.986 fused_ordering(177) 00:11:27.986 fused_ordering(178) 00:11:27.986 fused_ordering(179) 00:11:27.986 fused_ordering(180) 00:11:27.986 fused_ordering(181) 00:11:27.986 fused_ordering(182) 00:11:27.986 fused_ordering(183) 00:11:27.986 fused_ordering(184) 00:11:27.986 fused_ordering(185) 00:11:27.986 fused_ordering(186) 00:11:27.986 fused_ordering(187) 00:11:27.986 fused_ordering(188) 00:11:27.986 fused_ordering(189) 00:11:27.986 fused_ordering(190) 00:11:27.986 fused_ordering(191) 00:11:27.986 fused_ordering(192) 00:11:27.986 fused_ordering(193) 00:11:27.986 fused_ordering(194) 00:11:27.986 fused_ordering(195) 00:11:27.986 fused_ordering(196) 00:11:27.986 fused_ordering(197) 00:11:27.986 fused_ordering(198) 00:11:27.986 fused_ordering(199) 00:11:27.986 fused_ordering(200) 00:11:27.986 fused_ordering(201) 00:11:27.986 fused_ordering(202) 00:11:27.986 fused_ordering(203) 00:11:27.986 fused_ordering(204) 
00:11:27.986 fused_ordering(205) 00:11:27.986 fused_ordering(206) 00:11:27.986 fused_ordering(207) 00:11:27.986 fused_ordering(208) 00:11:27.986 fused_ordering(209) 00:11:27.986 fused_ordering(210) 00:11:27.986 fused_ordering(211) 00:11:27.986 fused_ordering(212) 00:11:27.986 fused_ordering(213) 00:11:27.986 fused_ordering(214) 00:11:27.986 fused_ordering(215) 00:11:27.986 fused_ordering(216) 00:11:27.986 fused_ordering(217) 00:11:27.986 fused_ordering(218) 00:11:27.986 fused_ordering(219) 00:11:27.986 fused_ordering(220) 00:11:27.986 fused_ordering(221) 00:11:27.986 fused_ordering(222) 00:11:27.986 fused_ordering(223) 00:11:27.986 fused_ordering(224) 00:11:27.986 fused_ordering(225) 00:11:27.986 fused_ordering(226) 00:11:27.986 fused_ordering(227) 00:11:27.986 fused_ordering(228) 00:11:27.986 fused_ordering(229) 00:11:27.986 fused_ordering(230) 00:11:27.986 fused_ordering(231) 00:11:27.986 fused_ordering(232) 00:11:27.986 fused_ordering(233) 00:11:27.986 fused_ordering(234) 00:11:27.986 fused_ordering(235) 00:11:27.986 fused_ordering(236) 00:11:27.986 fused_ordering(237) 00:11:27.986 fused_ordering(238) 00:11:27.986 fused_ordering(239) 00:11:27.986 fused_ordering(240) 00:11:27.986 fused_ordering(241) 00:11:27.986 fused_ordering(242) 00:11:27.986 fused_ordering(243) 00:11:27.986 fused_ordering(244) 00:11:27.986 fused_ordering(245) 00:11:27.986 fused_ordering(246) 00:11:27.986 fused_ordering(247) 00:11:27.986 fused_ordering(248) 00:11:27.986 fused_ordering(249) 00:11:27.986 fused_ordering(250) 00:11:27.986 fused_ordering(251) 00:11:27.986 fused_ordering(252) 00:11:27.986 fused_ordering(253) 00:11:27.986 fused_ordering(254) 00:11:27.986 fused_ordering(255) 00:11:27.986 fused_ordering(256) 00:11:27.986 fused_ordering(257) 00:11:27.986 fused_ordering(258) 00:11:27.986 fused_ordering(259) 00:11:27.986 fused_ordering(260) 00:11:27.986 fused_ordering(261) 00:11:27.986 fused_ordering(262) 00:11:27.986 fused_ordering(263) 00:11:27.986 fused_ordering(264) 00:11:27.986 fused_ordering(265) 00:11:27.986 fused_ordering(266) 00:11:27.986 fused_ordering(267) 00:11:27.986 fused_ordering(268) 00:11:27.986 fused_ordering(269) 00:11:27.986 fused_ordering(270) 00:11:27.986 fused_ordering(271) 00:11:27.986 fused_ordering(272) 00:11:27.986 fused_ordering(273) 00:11:27.986 fused_ordering(274) 00:11:27.986 fused_ordering(275) 00:11:27.986 fused_ordering(276) 00:11:27.986 fused_ordering(277) 00:11:27.986 fused_ordering(278) 00:11:27.986 fused_ordering(279) 00:11:27.986 fused_ordering(280) 00:11:27.986 fused_ordering(281) 00:11:27.986 fused_ordering(282) 00:11:27.986 fused_ordering(283) 00:11:27.986 fused_ordering(284) 00:11:27.986 fused_ordering(285) 00:11:27.986 fused_ordering(286) 00:11:27.986 fused_ordering(287) 00:11:27.986 fused_ordering(288) 00:11:27.986 fused_ordering(289) 00:11:27.986 fused_ordering(290) 00:11:27.986 fused_ordering(291) 00:11:27.986 fused_ordering(292) 00:11:27.986 fused_ordering(293) 00:11:27.986 fused_ordering(294) 00:11:27.986 fused_ordering(295) 00:11:27.986 fused_ordering(296) 00:11:27.986 fused_ordering(297) 00:11:27.986 fused_ordering(298) 00:11:27.986 fused_ordering(299) 00:11:27.986 fused_ordering(300) 00:11:27.987 fused_ordering(301) 00:11:27.987 fused_ordering(302) 00:11:27.987 fused_ordering(303) 00:11:27.987 fused_ordering(304) 00:11:27.987 fused_ordering(305) 00:11:27.987 fused_ordering(306) 00:11:27.987 fused_ordering(307) 00:11:27.987 fused_ordering(308) 00:11:27.987 fused_ordering(309) 00:11:27.987 fused_ordering(310) 00:11:27.987 fused_ordering(311) 00:11:27.987 
fused_ordering(312) 00:11:27.987 fused_ordering(313) 00:11:27.987 fused_ordering(314) 00:11:27.987 fused_ordering(315) 00:11:27.987 fused_ordering(316) 00:11:27.987 fused_ordering(317) 00:11:27.987 fused_ordering(318) 00:11:27.987 fused_ordering(319) 00:11:27.987 fused_ordering(320) 00:11:27.987 fused_ordering(321) 00:11:27.987 fused_ordering(322) 00:11:27.987 fused_ordering(323) 00:11:27.987 fused_ordering(324) 00:11:27.987 fused_ordering(325) 00:11:27.987 fused_ordering(326) 00:11:27.987 fused_ordering(327) 00:11:27.987 fused_ordering(328) 00:11:27.987 fused_ordering(329) 00:11:27.987 fused_ordering(330) 00:11:27.987 fused_ordering(331) 00:11:27.987 fused_ordering(332) 00:11:27.987 fused_ordering(333) 00:11:27.987 fused_ordering(334) 00:11:27.987 fused_ordering(335) 00:11:27.987 fused_ordering(336) 00:11:27.987 fused_ordering(337) 00:11:27.987 fused_ordering(338) 00:11:27.987 fused_ordering(339) 00:11:27.987 fused_ordering(340) 00:11:27.987 fused_ordering(341) 00:11:27.987 fused_ordering(342) 00:11:27.987 fused_ordering(343) 00:11:27.987 fused_ordering(344) 00:11:27.987 fused_ordering(345) 00:11:27.987 fused_ordering(346) 00:11:27.987 fused_ordering(347) 00:11:27.987 fused_ordering(348) 00:11:27.987 fused_ordering(349) 00:11:27.987 fused_ordering(350) 00:11:27.987 fused_ordering(351) 00:11:27.987 fused_ordering(352) 00:11:27.987 fused_ordering(353) 00:11:27.987 fused_ordering(354) 00:11:27.987 fused_ordering(355) 00:11:27.987 fused_ordering(356) 00:11:27.987 fused_ordering(357) 00:11:27.987 fused_ordering(358) 00:11:27.987 fused_ordering(359) 00:11:27.987 fused_ordering(360) 00:11:27.987 fused_ordering(361) 00:11:27.987 fused_ordering(362) 00:11:27.987 fused_ordering(363) 00:11:27.987 fused_ordering(364) 00:11:27.987 fused_ordering(365) 00:11:27.987 fused_ordering(366) 00:11:27.987 fused_ordering(367) 00:11:27.987 fused_ordering(368) 00:11:27.987 fused_ordering(369) 00:11:27.987 fused_ordering(370) 00:11:27.987 fused_ordering(371) 00:11:27.987 fused_ordering(372) 00:11:27.987 fused_ordering(373) 00:11:27.987 fused_ordering(374) 00:11:27.987 fused_ordering(375) 00:11:27.987 fused_ordering(376) 00:11:27.987 fused_ordering(377) 00:11:27.987 fused_ordering(378) 00:11:27.987 fused_ordering(379) 00:11:27.987 fused_ordering(380) 00:11:27.987 fused_ordering(381) 00:11:27.987 fused_ordering(382) 00:11:27.987 fused_ordering(383) 00:11:27.987 fused_ordering(384) 00:11:27.987 fused_ordering(385) 00:11:27.987 fused_ordering(386) 00:11:27.987 fused_ordering(387) 00:11:27.987 fused_ordering(388) 00:11:27.987 fused_ordering(389) 00:11:27.987 fused_ordering(390) 00:11:27.987 fused_ordering(391) 00:11:27.987 fused_ordering(392) 00:11:27.987 fused_ordering(393) 00:11:27.987 fused_ordering(394) 00:11:27.987 fused_ordering(395) 00:11:27.987 fused_ordering(396) 00:11:27.987 fused_ordering(397) 00:11:27.987 fused_ordering(398) 00:11:27.987 fused_ordering(399) 00:11:27.987 fused_ordering(400) 00:11:27.987 fused_ordering(401) 00:11:27.987 fused_ordering(402) 00:11:27.987 fused_ordering(403) 00:11:27.987 fused_ordering(404) 00:11:27.987 fused_ordering(405) 00:11:27.987 fused_ordering(406) 00:11:27.987 fused_ordering(407) 00:11:27.987 fused_ordering(408) 00:11:27.987 fused_ordering(409) 00:11:27.987 fused_ordering(410) 00:11:27.987 fused_ordering(411) 00:11:27.987 fused_ordering(412) 00:11:27.987 fused_ordering(413) 00:11:27.987 fused_ordering(414) 00:11:27.987 fused_ordering(415) 00:11:27.987 fused_ordering(416) 00:11:27.987 fused_ordering(417) 00:11:27.987 fused_ordering(418) 00:11:27.987 fused_ordering(419) 
00:11:27.987 fused_ordering(420) 00:11:27.987 fused_ordering(421) 00:11:27.987 fused_ordering(422) 00:11:27.987 fused_ordering(423) 00:11:27.987 fused_ordering(424) 00:11:27.987 fused_ordering(425) 00:11:27.987 fused_ordering(426) 00:11:27.987 fused_ordering(427) 00:11:27.987 fused_ordering(428) 00:11:27.987 fused_ordering(429) 00:11:27.987 fused_ordering(430) 00:11:27.987 fused_ordering(431) 00:11:27.987 fused_ordering(432) 00:11:27.987 fused_ordering(433) 00:11:27.987 fused_ordering(434) 00:11:27.987 fused_ordering(435) 00:11:27.987 fused_ordering(436) 00:11:27.987 fused_ordering(437) 00:11:27.987 fused_ordering(438) 00:11:27.987 fused_ordering(439) 00:11:27.987 fused_ordering(440) 00:11:27.987 fused_ordering(441) 00:11:27.987 fused_ordering(442) 00:11:27.987 fused_ordering(443) 00:11:27.987 fused_ordering(444) 00:11:27.987 fused_ordering(445) 00:11:27.987 fused_ordering(446) 00:11:27.987 fused_ordering(447) 00:11:27.987 fused_ordering(448) 00:11:27.987 fused_ordering(449) 00:11:27.987 fused_ordering(450) 00:11:27.987 fused_ordering(451) 00:11:27.987 fused_ordering(452) 00:11:27.987 fused_ordering(453) 00:11:27.987 fused_ordering(454) 00:11:27.987 fused_ordering(455) 00:11:27.987 fused_ordering(456) 00:11:27.987 fused_ordering(457) 00:11:27.987 fused_ordering(458) 00:11:27.987 fused_ordering(459) 00:11:27.987 fused_ordering(460) 00:11:27.987 fused_ordering(461) 00:11:27.987 fused_ordering(462) 00:11:27.987 fused_ordering(463) 00:11:27.987 fused_ordering(464) 00:11:27.987 fused_ordering(465) 00:11:27.987 fused_ordering(466) 00:11:27.987 fused_ordering(467) 00:11:27.987 fused_ordering(468) 00:11:27.987 fused_ordering(469) 00:11:27.987 fused_ordering(470) 00:11:27.987 fused_ordering(471) 00:11:27.987 fused_ordering(472) 00:11:27.987 fused_ordering(473) 00:11:27.987 fused_ordering(474) 00:11:27.987 fused_ordering(475) 00:11:27.987 fused_ordering(476) 00:11:27.987 fused_ordering(477) 00:11:27.987 fused_ordering(478) 00:11:27.987 fused_ordering(479) 00:11:27.987 fused_ordering(480) 00:11:27.987 fused_ordering(481) 00:11:27.987 fused_ordering(482) 00:11:27.987 fused_ordering(483) 00:11:27.987 fused_ordering(484) 00:11:27.987 fused_ordering(485) 00:11:27.987 fused_ordering(486) 00:11:27.987 fused_ordering(487) 00:11:27.987 fused_ordering(488) 00:11:27.987 fused_ordering(489) 00:11:27.987 fused_ordering(490) 00:11:27.987 fused_ordering(491) 00:11:27.987 fused_ordering(492) 00:11:27.987 fused_ordering(493) 00:11:27.987 fused_ordering(494) 00:11:27.987 fused_ordering(495) 00:11:27.987 fused_ordering(496) 00:11:27.987 fused_ordering(497) 00:11:27.987 fused_ordering(498) 00:11:27.987 fused_ordering(499) 00:11:27.987 fused_ordering(500) 00:11:27.987 fused_ordering(501) 00:11:27.987 fused_ordering(502) 00:11:27.987 fused_ordering(503) 00:11:27.987 fused_ordering(504) 00:11:27.987 fused_ordering(505) 00:11:27.987 fused_ordering(506) 00:11:27.987 fused_ordering(507) 00:11:27.987 fused_ordering(508) 00:11:27.987 fused_ordering(509) 00:11:27.987 fused_ordering(510) 00:11:27.987 fused_ordering(511) 00:11:27.987 fused_ordering(512) 00:11:27.987 fused_ordering(513) 00:11:27.987 fused_ordering(514) 00:11:27.987 fused_ordering(515) 00:11:27.987 fused_ordering(516) 00:11:27.987 fused_ordering(517) 00:11:27.987 fused_ordering(518) 00:11:27.987 fused_ordering(519) 00:11:27.987 fused_ordering(520) 00:11:27.988 fused_ordering(521) 00:11:27.988 fused_ordering(522) 00:11:27.988 fused_ordering(523) 00:11:27.988 fused_ordering(524) 00:11:27.988 fused_ordering(525) 00:11:27.988 fused_ordering(526) 00:11:27.988 
fused_ordering(527) 00:11:27.988 fused_ordering(528) 00:11:27.988 fused_ordering(529) 00:11:27.988 fused_ordering(530) 00:11:27.988 fused_ordering(531) 00:11:27.988 fused_ordering(532) 00:11:27.988 fused_ordering(533) 00:11:27.988 fused_ordering(534) 00:11:27.988 fused_ordering(535) 00:11:27.988 fused_ordering(536) 00:11:27.988 fused_ordering(537) 00:11:27.988 fused_ordering(538) 00:11:27.988 fused_ordering(539) 00:11:27.988 fused_ordering(540) 00:11:27.988 fused_ordering(541) 00:11:27.988 fused_ordering(542) 00:11:27.988 fused_ordering(543) 00:11:27.988 fused_ordering(544) 00:11:27.988 fused_ordering(545) 00:11:27.988 fused_ordering(546) 00:11:27.988 fused_ordering(547) 00:11:27.988 fused_ordering(548) 00:11:27.988 fused_ordering(549) 00:11:27.988 fused_ordering(550) 00:11:27.988 fused_ordering(551) 00:11:27.988 fused_ordering(552) 00:11:27.988 fused_ordering(553) 00:11:27.988 fused_ordering(554) 00:11:27.988 fused_ordering(555) 00:11:27.988 fused_ordering(556) 00:11:27.988 fused_ordering(557) 00:11:27.988 fused_ordering(558) 00:11:27.988 fused_ordering(559) 00:11:27.988 fused_ordering(560) 00:11:27.988 fused_ordering(561) 00:11:27.988 fused_ordering(562) 00:11:27.988 fused_ordering(563) 00:11:27.988 fused_ordering(564) 00:11:27.988 fused_ordering(565) 00:11:27.988 fused_ordering(566) 00:11:27.988 fused_ordering(567) 00:11:27.988 fused_ordering(568) 00:11:27.988 fused_ordering(569) 00:11:27.988 fused_ordering(570) 00:11:27.988 fused_ordering(571) 00:11:27.988 fused_ordering(572) 00:11:27.988 fused_ordering(573) 00:11:27.988 fused_ordering(574) 00:11:27.988 fused_ordering(575) 00:11:27.988 fused_ordering(576) 00:11:27.988 fused_ordering(577) 00:11:27.988 fused_ordering(578) 00:11:27.988 fused_ordering(579) 00:11:27.988 fused_ordering(580) 00:11:27.988 fused_ordering(581) 00:11:27.988 fused_ordering(582) 00:11:27.988 fused_ordering(583) 00:11:27.988 fused_ordering(584) 00:11:27.988 fused_ordering(585) 00:11:27.988 fused_ordering(586) 00:11:27.988 fused_ordering(587) 00:11:27.988 fused_ordering(588) 00:11:27.988 fused_ordering(589) 00:11:27.988 fused_ordering(590) 00:11:27.988 fused_ordering(591) 00:11:27.988 fused_ordering(592) 00:11:27.988 fused_ordering(593) 00:11:27.988 fused_ordering(594) 00:11:27.988 fused_ordering(595) 00:11:27.988 fused_ordering(596) 00:11:27.988 fused_ordering(597) 00:11:27.988 fused_ordering(598) 00:11:27.988 fused_ordering(599) 00:11:27.988 fused_ordering(600) 00:11:27.988 fused_ordering(601) 00:11:27.988 fused_ordering(602) 00:11:27.988 fused_ordering(603) 00:11:27.988 fused_ordering(604) 00:11:27.988 fused_ordering(605) 00:11:27.988 fused_ordering(606) 00:11:27.988 fused_ordering(607) 00:11:27.988 fused_ordering(608) 00:11:27.988 fused_ordering(609) 00:11:27.988 fused_ordering(610) 00:11:27.988 fused_ordering(611) 00:11:27.988 fused_ordering(612) 00:11:27.988 fused_ordering(613) 00:11:27.988 fused_ordering(614) 00:11:27.988 fused_ordering(615) 00:11:28.247 fused_ordering(616) 00:11:28.247 fused_ordering(617) 00:11:28.247 fused_ordering(618) 00:11:28.247 fused_ordering(619) 00:11:28.247 fused_ordering(620) 00:11:28.247 fused_ordering(621) 00:11:28.247 fused_ordering(622) 00:11:28.247 fused_ordering(623) 00:11:28.247 fused_ordering(624) 00:11:28.247 fused_ordering(625) 00:11:28.247 fused_ordering(626) 00:11:28.247 fused_ordering(627) 00:11:28.247 fused_ordering(628) 00:11:28.247 fused_ordering(629) 00:11:28.247 fused_ordering(630) 00:11:28.247 fused_ordering(631) 00:11:28.247 fused_ordering(632) 00:11:28.247 fused_ordering(633) 00:11:28.247 fused_ordering(634) 
00:11:28.247 fused_ordering(635) 00:11:28.247 fused_ordering(636) 00:11:28.247 fused_ordering(637) 00:11:28.247 fused_ordering(638) 00:11:28.247 fused_ordering(639) 00:11:28.247 fused_ordering(640) 00:11:28.247 fused_ordering(641) 00:11:28.247 fused_ordering(642) 00:11:28.247 fused_ordering(643) 00:11:28.247 fused_ordering(644) 00:11:28.247 fused_ordering(645) 00:11:28.247 fused_ordering(646) 00:11:28.247 fused_ordering(647) 00:11:28.247 fused_ordering(648) 00:11:28.247 fused_ordering(649) 00:11:28.247 fused_ordering(650) 00:11:28.247 fused_ordering(651) 00:11:28.247 fused_ordering(652) 00:11:28.247 fused_ordering(653) 00:11:28.247 fused_ordering(654) 00:11:28.247 fused_ordering(655) 00:11:28.247 fused_ordering(656) 00:11:28.247 fused_ordering(657) 00:11:28.247 fused_ordering(658) 00:11:28.247 fused_ordering(659) 00:11:28.247 fused_ordering(660) 00:11:28.247 fused_ordering(661) 00:11:28.247 fused_ordering(662) 00:11:28.247 fused_ordering(663) 00:11:28.247 fused_ordering(664) 00:11:28.247 fused_ordering(665) 00:11:28.247 fused_ordering(666) 00:11:28.247 fused_ordering(667) 00:11:28.247 fused_ordering(668) 00:11:28.247 fused_ordering(669) 00:11:28.247 fused_ordering(670) 00:11:28.247 fused_ordering(671) 00:11:28.247 fused_ordering(672) 00:11:28.247 fused_ordering(673) 00:11:28.247 fused_ordering(674) 00:11:28.247 fused_ordering(675) 00:11:28.247 fused_ordering(676) 00:11:28.247 fused_ordering(677) 00:11:28.247 fused_ordering(678) 00:11:28.247 fused_ordering(679) 00:11:28.247 fused_ordering(680) 00:11:28.247 fused_ordering(681) 00:11:28.247 fused_ordering(682) 00:11:28.247 fused_ordering(683) 00:11:28.247 fused_ordering(684) 00:11:28.247 fused_ordering(685) 00:11:28.247 fused_ordering(686) 00:11:28.247 fused_ordering(687) 00:11:28.247 fused_ordering(688) 00:11:28.247 fused_ordering(689) 00:11:28.247 fused_ordering(690) 00:11:28.247 fused_ordering(691) 00:11:28.247 fused_ordering(692) 00:11:28.247 fused_ordering(693) 00:11:28.247 fused_ordering(694) 00:11:28.247 fused_ordering(695) 00:11:28.247 fused_ordering(696) 00:11:28.247 fused_ordering(697) 00:11:28.247 fused_ordering(698) 00:11:28.247 fused_ordering(699) 00:11:28.247 fused_ordering(700) 00:11:28.247 fused_ordering(701) 00:11:28.247 fused_ordering(702) 00:11:28.247 fused_ordering(703) 00:11:28.247 fused_ordering(704) 00:11:28.247 fused_ordering(705) 00:11:28.247 fused_ordering(706) 00:11:28.247 fused_ordering(707) 00:11:28.247 fused_ordering(708) 00:11:28.247 fused_ordering(709) 00:11:28.247 fused_ordering(710) 00:11:28.247 fused_ordering(711) 00:11:28.247 fused_ordering(712) 00:11:28.247 fused_ordering(713) 00:11:28.247 fused_ordering(714) 00:11:28.247 fused_ordering(715) 00:11:28.247 fused_ordering(716) 00:11:28.247 fused_ordering(717) 00:11:28.247 fused_ordering(718) 00:11:28.247 fused_ordering(719) 00:11:28.247 fused_ordering(720) 00:11:28.247 fused_ordering(721) 00:11:28.247 fused_ordering(722) 00:11:28.247 fused_ordering(723) 00:11:28.247 fused_ordering(724) 00:11:28.247 fused_ordering(725) 00:11:28.247 fused_ordering(726) 00:11:28.247 fused_ordering(727) 00:11:28.247 fused_ordering(728) 00:11:28.247 fused_ordering(729) 00:11:28.247 fused_ordering(730) 00:11:28.247 fused_ordering(731) 00:11:28.247 fused_ordering(732) 00:11:28.247 fused_ordering(733) 00:11:28.247 fused_ordering(734) 00:11:28.247 fused_ordering(735) 00:11:28.247 fused_ordering(736) 00:11:28.247 fused_ordering(737) 00:11:28.247 fused_ordering(738) 00:11:28.247 fused_ordering(739) 00:11:28.247 fused_ordering(740) 00:11:28.247 fused_ordering(741) 00:11:28.247 
fused_ordering(742) 00:11:28.247 fused_ordering(743) 00:11:28.247 [fused_ordering(744) through fused_ordering(1022) elided: identical per-command records, timestamps 00:11:28.247 to 00:11:28.508] fused_ordering(1023)
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:28.508 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:28.509 rmmod nvme_rdma
00:11:28.509 rmmod nvme_fabrics
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3552271 ']'
00:11:28.509
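The nvmftestfini trace above unloads the nvme-rdma and nvme-fabrics kernel modules inside a retry loop with errexit temporarily disabled, since rmmod can fail while in-flight I/O still holds module references. A minimal sketch of that pattern, assuming only what the trace shows (the function name and the back-off sleep are additions, not the authoritative nvmf/common.sh source):

  # Sketch of the traced module-unload retry: up to 20 attempts, failures
  # tolerated (set +e) because draining I/O can keep module refcounts up.
  nvmfcleanup_sketch() {
      sync                                   # flush before removing transports
      set +e                                 # rmmod may fail transiently
      for i in {1..20}; do
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
          sleep 1                            # assumption: brief back-off
      done
      set -e
  }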
12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3552271 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3552271 ']' 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3552271 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3552271 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3552271' 00:11:28.509 killing process with pid 3552271 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3552271 00:11:28.509 [2024-05-15 12:55:06.270499] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:28.509 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3552271 00:11:28.509 [2024-05-15 12:55:06.315443] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:28.769 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.769 12:55:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:28.769 00:11:28.769 real 0m8.094s 00:11:28.769 user 0m4.512s 00:11:28.769 sys 0m4.893s 00:11:28.769 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.769 12:55:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.769 ************************************ 00:11:28.769 END TEST nvmf_fused_ordering 00:11:28.769 ************************************ 00:11:28.769 12:55:06 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:28.769 12:55:06 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:28.769 12:55:06 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.769 12:55:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:28.769 ************************************ 00:11:28.769 START TEST nvmf_delete_subsystem 00:11:28.769 ************************************ 00:11:28.769 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:29.029 * Looking for test storage... 
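The killprocess trace above checks that the pid still refers to the process the test started (via ps and a sudo guard) before signalling it, then reaps it with wait. A rough re-creation under that reading (a hypothetical helper; the real common/autotest_common.sh has more branches):

  # Kill a test process by pid, verifying it first, as the trace suggests.
  killprocess_sketch() {
      local pid=$1
      [[ -z $pid ]] && return 1                      # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 0         # already gone
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")    # e.g. "reactor_1"
          [[ $name == sudo ]] && return 1            # never kill a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                # reap; ignore exit status
  }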
00:11:29.029 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.029 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- 
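In the common.sh sourcing traced above, NVME_HOSTNQN comes from `nvme gen-hostnqn` and NVME_HOSTID is the UUID tail of that NQN; both are then passed to every `nvme connect`. A hedged sketch of that derivation (the parameter expansion used here is an assumption, not a copy of the script):

  # Derive host identity the way the trace suggests: generate an NQN,
  # then keep only its trailing UUID as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}          # strip everything up to the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "host args: ${NVME_HOST[*]}"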
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.030 12:55:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.600 12:55:12 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:35.600 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:35.600 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:35.600 Found net devices under 0000:18:00.0: mlx_0_0 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.600 12:55:12 
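The "Found net devices under 0000:18:00.0: mlx_0_0" records come from globbing each PCI function's net/ directory in sysfs, as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) trace shows. The same lookup as a standalone helper (the function name is invented; the sysfs layout is standard Linux):

  # List the network interfaces backed by a PCI function, e.g. 0000:18:00.0.
  pci_to_netdevs() {
      local pci=$1
      local devs=(/sys/bus/pci/devices/"$pci"/net/*)
      [[ -e ${devs[0]} ]] || { echo "no net devices under $pci" >&2; return 1; }
      printf '%s\n' "${devs[@]##*/}"       # drop the sysfs path, keep the ifname
  }
  pci_to_netdevs 0000:18:00.0              # prints mlx_0_0 on this rig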
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:35.600 Found net devices under 0000:18:00.1: mlx_0_1 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.600 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 
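Before touching the NICs, rdma_device_init loads the InfiniBand/RDMA kernel stack in the order the modprobe records above show. The same sequence as a standalone snippet (the loop and the Linux guard are conveniences added here, not part of the script):

  # Load the RDMA kernel stack in the traced order; ordering follows deps.
  [[ $(uname) == Linux ]] || { echo 'RDMA module setup is Linux-only' >&2; exit 1; }
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done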
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:35.601 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.601 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:35.601 altname enp24s0f0np0 00:11:35.601 altname ens785f0np0 00:11:35.601 inet 192.168.100.8/24 scope global mlx_0_0 00:11:35.601 valid_lft forever preferred_lft forever 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:35.601 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.601 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:35.601 altname enp24s0f1np1 00:11:35.601 altname ens785f1np1 00:11:35.601 inet 192.168.100.9/24 scope global mlx_0_1 00:11:35.601 valid_lft forever preferred_lft forever 00:11:35.601 12:55:12 
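get_ip_address, traced above for both mlx_0_0 and mlx_0_1, takes field 4 of `ip -o -4 addr show` and strips the CIDR suffix. As a standalone function (the pipeline mirrors the trace; the error branch is an addition):

  # Print the primary IPv4 address of an interface, without the /prefix.
  get_ip_address_sketch() {
      local interface=$1 ip
      ip=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
      [[ -n $ip ]] || { echo "no IPv4 address on $interface" >&2; return 1; }
      echo "$ip"
  }
  get_ip_address_sketch mlx_0_0            # prints 192.168.100.8 on this rig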
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- 
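get_available_rdma_ips, whose loop is traced above, appears to walk the RDMA-capable interfaces and concatenate their IPv4 addresses into RDMA_IP_LIST. A hedged sketch of that aggregation (get_ip_address_sketch is the helper sketched earlier; the interface list is hard-coded here only for illustration):

  # Collect the IPv4 address of every RDMA-capable interface, one per line.
  get_available_rdma_ips_sketch() {
      local nic ips=""
      for nic in mlx_0_0 mlx_0_1; do       # assumption: the two ports found above
          ips+="$(get_ip_address_sketch "$nic")"$'\n'
      done
      printf '%s' "$ips"
  }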
nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:35.601 192.168.100.9' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:35.601 192.168.100.9' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:35.601 192.168.100.9' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3555364 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3555364 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3555364 ']' 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:35.601 12:55:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.601 [2024-05-15 12:55:12.828867] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
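The RDMA_IP_LIST records above show the first and second target IPs being split out with head and tail. Equivalently, as a sketch (the list is hard-coded here only to make the snippet self-contained):

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'     # assumption: the two IPs found above
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  [[ -z $NVMF_FIRST_TARGET_IP ]] && { echo 'no RDMA IPs found' >&2; exit 1; }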
00:11:35.601 [2024-05-15 12:55:12.828925] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.601 [2024-05-15 12:55:12.903521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.601 [2024-05-15 12:55:12.996234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.601 [2024-05-15 12:55:12.996276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.601 [2024-05-15 12:55:12.996285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.601 [2024-05-15 12:55:12.996294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.601 [2024-05-15 12:55:12.996302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.601 [2024-05-15 12:55:12.999077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.601 [2024-05-15 12:55:12.999081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.861 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.861 [2024-05-15 12:55:13.701434] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc8da90/0xc91f80) succeed. 00:11:35.862 [2024-05-15 12:55:13.710448] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc8ef90/0xcd3610) succeed. 
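The records around here show nvmfappstart launching nvmf_tgt (nvmfpid=3555364), waitforlisten blocking until the RPC socket answers, and then delete_subsystem.sh steps @15 through @26 building the target over RPC before starting perf. A condensed, hedged sketch of that whole bring-up (SPDK_DIR and the readiness probe via the rpc_get_methods RPC are assumptions; the real helpers retry with timeouts and richer error handling):

  # Start the NVMe-oF target, wait for its RPC socket, then configure it
  # the way the trace that follows does.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}   # assumed layout
  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
  "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for _ in $(seq 1 100); do
      $rpc rpc_get_methods &>/dev/null && break      # socket answers: target is up
      sleep 0.1
  done
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0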
00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.121 [2024-05-15 12:55:13.794240] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:36.121 [2024-05-15 12:55:13.794557] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.121 NULL1 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.121 Delay0 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3555564 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:36.121 12:55:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.121 [2024-05-15 12:55:13.901327] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:38.028 12:55:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.028 12:55:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.028 12:55:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 NVMe io qpair process completion error 00:11:39.407 12:55:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.407 12:55:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:39.407 12:55:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3555564 00:11:39.407 12:55:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:39.665 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:39.665 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3555564 00:11:39.665 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, 
sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 starting I/O failed: -6 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.234 Write completed with error (sct=0, sc=8) 00:11:40.234 Read completed with error (sct=0, 
sc=8) 00:11:40.234 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Write completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: -6 00:11:40.235 Read completed with error (sct=0, sc=8) 00:11:40.235 starting I/O failed: 
-6
00:11:40.235 Write completed with error (sct=0, sc=8)
00:11:40.235 starting I/O failed: -6
00:11:40.235 Read completed with error (sct=0, sc=8)
00:11:40.235 starting I/O failed: -6
00:11:40.235 [... interleaved Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeat, one per failed submission ...]
00:11:40.235 Read completed with error (sct=0, sc=8)
00:11:40.235 Write completed with error (sct=0, sc=8)
00:11:40.236 [... interleaved Read/Write "completed with error (sct=0, sc=8)" entries repeat, one per outstanding I/O drained with an error ...]
00:11:40.236 Initializing NVMe Controllers
00:11:40.236 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:11:40.236 Controller IO queue size 128, less than required.
00:11:40.236 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:40.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:40.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:40.236 Initialization complete. Launching workers.
00:11:40.236 ========================================================
00:11:40.236                                                                               Latency(us)
00:11:40.236 Device Information                                                          :    IOPS   MiB/s    Average        min        max
00:11:40.236 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   80.29    0.04 1596711.74 1000272.22 2985954.69
00:11:40.236 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   80.29    0.04 1598198.33 1001017.86 2987308.78
00:11:40.236 ========================================================
00:11:40.236 Total                                                                       :  160.57    0.08 1597455.04 1000272.22 2987308.78
00:11:40.236
00:11:40.236 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:40.236 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3555564
00:11:40.236 12:55:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:40.236 [2024-05-15 12:55:17.998534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:11:40.236 [2024-05-15 12:55:17.998577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
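The xtrace entries above (delete_subsystem.sh lines 35-38, per the @NN markers) are single iterations of a poll-until-exit loop around the perf process. A minimal sketch of that pattern, assuming a perf_pid variable as a stand-in for the script's actual bookkeeping; the real script's control flow may differ in detail:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # signal 0 only probes whether the pid is still alive
            if (( delay++ > 30 )); then          # cap the wait at ~15 s (30 polls x 0.5 s)
                    echo 'perf did not exit in time' >&2
                    exit 1
            fi
            sleep 0.5
    done
    # falling out of the loop means spdk_nvme_perf exited while the subsystem was being deleted
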
00:11:40.236 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3555564 00:11:40.804 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3555564) - No such process 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3555564 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3555564 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3555564 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 [2024-05-15 12:55:18.522219] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3556120 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:40.804 12:55:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.804 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.804 [2024-05-15 12:55:18.615651] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:41.371 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.371 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:41.371 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.940 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.940 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:41.940 12:55:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.199 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.199 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:42.200 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.768 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.768 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:42.768 12:55:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.335 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.335 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:43.335 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.904 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.904 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:43.904 12:55:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.472 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.472 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:44.472 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.731 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.731 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:44.731 12:55:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.300 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:45.300 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:45.300 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.866 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:45.866 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:45.866 12:55:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.434 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.434 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:46.434 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.003 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.003 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:47.003 12:55:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.289 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.289 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:47.289 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.924 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.924 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120 00:11:47.924 12:55:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.924 Initializing NVMe Controllers 00:11:47.924 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.924 Controller IO queue size 128, less than required. 00:11:47.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:47.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:47.924 Initialization complete. Launching workers. 
00:11:47.924 ========================================================
00:11:47.924                                                                               Latency(us)
00:11:47.924 Device Information                                                          :    IOPS   MiB/s    Average        min        max
00:11:47.924 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1001370.40 1000068.12 1004133.14
00:11:47.924 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1002625.68 1000171.46 1006249.39
00:11:47.924 ========================================================
00:11:47.924 Total                                                                       :  256.00    0.12 1001998.04 1000068.12 1006249.39
00:11:47.924
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556120
00:11:48.495 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3556120) - No such process
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3556120
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:48.495 rmmod nvme_rdma
00:11:48.495 rmmod nvme_fabrics
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3555364 ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3555364
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3555364 ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3555364
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3555364
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3555364'
00:11:48.495 killing process with pid 3555364
00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill
3555364 00:11:48.495 [2024-05-15 12:55:26.222379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:48.495 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3555364 00:11:48.495 [2024-05-15 12:55:26.278311] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:48.755 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.755 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:48.755 00:11:48.755 real 0m19.886s 00:11:48.755 user 0m49.995s 00:11:48.755 sys 0m5.851s 00:11:48.755 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.755 12:55:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 ************************************ 00:11:48.755 END TEST nvmf_delete_subsystem 00:11:48.755 ************************************ 00:11:48.755 12:55:26 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:48.755 12:55:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:48.755 12:55:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.755 12:55:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:48.755 ************************************ 00:11:48.755 START TEST nvmf_ns_masking 00:11:48.755 ************************************ 00:11:48.755 12:55:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:49.017 * Looking for test storage... 
00:11:49.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=4d659450-f0cc-4666-8f0b-2e8fbdd6f397 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
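Condensed from the ns_masking.sh prologue traced above, the identifiers the masking test keys on are set up as follows (a sketch; the concrete values shown are the ones captured in this run and are regenerated on every invocation):

    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1    # subsystem under test
    HOSTNQN=nqn.2016-06.io.spdk:host1       # host NQN the masking rules will reference
    HOSTID=$(uuidgen)                       # 4d659450-f0cc-4666-8f0b-2e8fbdd6f397 in this run
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:809f3706-... in this run
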
00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.017 12:55:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:55.591 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:55.591 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:55.591 Found net devices under 
0000:18:00.0: mlx_0_0 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:55.591 Found net devices under 0000:18:00.1: mlx_0_1 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:55.591 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:55.591 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.591 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:55.591 altname enp24s0f0np0 00:11:55.591 altname ens785f0np0 00:11:55.592 inet 192.168.100.8/24 scope global mlx_0_0 00:11:55.592 valid_lft forever preferred_lft forever 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:55.592 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.592 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:55.592 altname enp24s0f1np1 00:11:55.592 altname ens785f1np1 00:11:55.592 inet 192.168.100.9/24 scope global mlx_0_1 00:11:55.592 valid_lft forever preferred_lft forever 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:55.592 192.168.100.9' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:55.592 192.168.100.9' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@457 -- # head -n 1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:55.592 192.168.100.9' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3560006 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3560006 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3560006 ']' 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.592 12:55:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.592 [2024-05-15 12:55:32.683318] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:11:55.592 [2024-05-15 12:55:32.683378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.592 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.592 [2024-05-15 12:55:32.757905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.592 [2024-05-15 12:55:32.843915] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.592 [2024-05-15 12:55:32.843960] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:55.592 [2024-05-15 12:55:32.843970] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.592 [2024-05-15 12:55:32.843994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.592 [2024-05-15 12:55:32.844001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.592 [2024-05-15 12:55:32.844065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.592 [2024-05-15 12:55:32.844132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.592 [2024-05-15 12:55:32.844212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.592 [2024-05-15 12:55:32.844214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.850 12:55:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:55.850 [2024-05-15 12:55:33.724332] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24d60f0/0x24da5e0) succeed. 00:11:56.108 [2024-05-15 12:55:33.734857] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24d7730/0x251bc70) succeed. 
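The rpc.py calls scattered through the trace just above and below drive the RDMA target bring-up step by step; consolidated here as a sketch, every command and argument is taken from the captured trace, with only the $rpc shorthand added for brevity:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            -I 4d659450-f0cc-4666-8f0b-2e8fbdd6f397 -a 192.168.100.8 -s 4420 -i 4
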
00:11:56.108 12:55:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:56.108 12:55:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:56.108 12:55:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:56.367 Malloc1 00:11:56.367 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:56.626 Malloc2 00:11:56.626 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.626 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:56.884 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:57.143 [2024-05-15 12:55:34.822822] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:57.143 [2024-05-15 12:55:34.823258] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:57.143 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:57.143 12:55:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d659450-f0cc-4666-8f0b-2e8fbdd6f397 -a 192.168.100.8 -s 4420 -i 4 00:11:57.403 12:55:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.403 12:55:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:57.403 12:55:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.403 12:55:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:57.403 12:55:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:59.310 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:59.569 [ 0]:0x1 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2a2e8f4b5d0247be99a950ebbabcc804 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2a2e8f4b5d0247be99a950ebbabcc804 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.569 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:59.828 [ 0]:0x1 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2a2e8f4b5d0247be99a950ebbabcc804 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2a2e8f4b5d0247be99a950ebbabcc804 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:59.828 [ 1]:0x2 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:59.828 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.087 12:55:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.346 12:55:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:00.606 12:55:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:00.606 12:55:38 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d659450-f0cc-4666-8f0b-2e8fbdd6f397 -a 192.168.100.8 -s 4420 -i 4 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:12:00.865 12:55:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:02.772 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
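The ns_is_visible checks being traced here reduce to two nvme-cli probes. A sketch of the helper reconstructed from its xtrace, assuming the /dev/nvme0 node name this run resolved; the helper's real definition in ns_masking.sh may differ in detail:

    ns_is_visible () {                      # $1 = nsid to look for, e.g. 0x1
            nvme list-ns /dev/nvme0 | grep "$1" || return 1             # nsid listed at all?
            local nguid
            nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
            [[ $nguid != "00000000000000000000000000000000" ]]          # all-zero NGUID means masked
    }

The NOT wrapper around it (the es=1 in the trace that follows) asserts the opposite: after Malloc1 was re-added with --no-auto-visible, the same check must fail until nvmf_ns_add_host exposes namespace 1 to nqn.2016-06.io.spdk:host1.
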
00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:03.031 [ 0]:0x2 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.031 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.290 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:03.290 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.290 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:03.290 [ 0]:0x1 00:12:03.290 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.290 12:55:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2a2e8f4b5d0247be99a950ebbabcc804 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2a2e8f4b5d0247be99a950ebbabcc804 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:03.290 [ 1]:0x2 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.290 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.549 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking 
-- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:03.550 [ 0]:0x2 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:03.550 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.809 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:04.070 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:04.070 12:55:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d659450-f0cc-4666-8f0b-2e8fbdd6f397 -a 192.168.100.8 -s 4420 -i 4 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@1194 -- # local i=0 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:04.330 12:55:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:06.868 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.869 [ 0]:0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2a2e8f4b5d0247be99a950ebbabcc804 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2a2e8f4b5d0247be99a950ebbabcc804 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:06.869 [ 1]:0x2 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:06.869 [ 0]:0x2 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
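Behind these probes the control plane is only a few RPCs. A sketch of the masking flow this test exercises, assuming a running SPDK target; the short RPC alias and its PATH placement are assumptions (the log invokes rpc.py by its full workspace path):

    #!/usr/bin/env bash
    RPC=rpc.py                          # assumption: SPDK's rpc.py on PATH
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOST=nqn.2016-06.io.spdk:host1

    # Attach a bdev as NSID 1, hidden from every host until explicitly allowed.
    $RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 --no-auto-visible

    # Grant, then revoke, this host's view of NSID 1. Earlier in the trace a
    # connected initiator observes both transitions without reconnecting.
    $RPC nvmf_ns_add_host    "$SUBSYS" 1 "$HOST"
    $RPC nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"

Namespace 2 was added without --no-auto-visible and is therefore visible to all hosts, so the nvmf_ns_remove_host call against NSID 2 that the NOT wrapper is evaluating here is expected to fail, as the JSON-RPC error response that follows confirms.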
00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.869 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:07.128 [2024-05-15 12:55:44.789845] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:07.128 request: 00:12:07.128 { 00:12:07.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.129 "nsid": 2, 00:12:07.129 "host": "nqn.2016-06.io.spdk:host1", 00:12:07.129 "method": "nvmf_ns_remove_host", 00:12:07.129 "req_id": 1 00:12:07.129 } 00:12:07.129 Got JSON-RPC error response 00:12:07.129 response: 00:12:07.129 { 00:12:07.129 "code": -32602, 00:12:07.129 "message": "Invalid parameters" 00:12:07.129 } 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.129 [ 0]:0x2 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5beb884f048944d6a6aea351f44569eb 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5beb884f048944d6a6aea351f44569eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:07.129 12:55:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.388 12:55:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:07.647 rmmod nvme_rdma 00:12:07.647 rmmod nvme_fabrics 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3560006 ']' 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3560006 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3560006 ']' 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3560006 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:12:07.647 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:07.647 12:55:45 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3560006 00:12:07.906 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:07.906 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:07.906 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3560006' 00:12:07.906 killing process with pid 3560006 00:12:07.906 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3560006 00:12:07.906 [2024-05-15 12:55:45.534289] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:07.906 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3560006 00:12:07.906 [2024-05-15 12:55:45.617658] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:08.166 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.166 12:55:45 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:08.166 00:12:08.166 real 0m19.292s 00:12:08.166 user 0m56.431s 00:12:08.166 sys 0m6.000s 00:12:08.166 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:08.166 12:55:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:08.166 ************************************ 00:12:08.166 END TEST nvmf_ns_masking 00:12:08.166 ************************************ 00:12:08.166 12:55:45 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:08.166 12:55:45 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:08.166 12:55:45 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:08.166 12:55:45 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:08.166 12:55:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:08.166 ************************************ 00:12:08.166 START TEST nvmf_nvme_cli 00:12:08.166 ************************************ 00:12:08.166 12:55:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:08.166 * Looking for test storage... 
00:12:08.426 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.426 12:55:46 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:13.703 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:13.703 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:13.703 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:13.704 Found net devices under 0000:18:00.0: mlx_0_0 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.704 12:55:51 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:13.704 Found net devices under 0000:18:00.1: mlx_0_1 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.704 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
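The interface walk above reduces to standard ip(8) plumbing. A sketch of the get_ip_address pattern from nvmf/common.sh, assuming the renamed mlx_0_* test interfaces already exist on the node:

    #!/usr/bin/env bash
    # Sketch: resolve the IPv4 address of an RDMA test interface.
    get_ip_address() {
        local ifc=$1
        # "ip -o -4" prints one line per address; field 4 is the CIDR form,
        # e.g. "192.168.100.8/24", so the cut strips the prefix length.
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    for ifc in mlx_0_0 mlx_0_1; do      # assumption: interface names from this job
        addr=$(get_ip_address "$ifc")
        if [[ -n $addr ]]; then
            echo "$ifc $addr"
        else
            echo "$ifc: no IPv4 address assigned" >&2
        fi
    done

The 192.168.100.8 and 192.168.100.9 addresses collected into RDMA_IP_LIST below come straight from this helper.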
00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:13.964 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.964 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:13.964 altname enp24s0f0np0 00:12:13.964 altname ens785f0np0 00:12:13.964 inet 192.168.100.8/24 scope global mlx_0_0 00:12:13.964 valid_lft forever preferred_lft forever 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:13.964 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.964 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:13.964 altname enp24s0f1np1 00:12:13.964 altname ens785f1np1 00:12:13.964 inet 192.168.100.9/24 scope global mlx_0_1 00:12:13.964 valid_lft forever preferred_lft forever 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:13.964 192.168.100.9' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:13.964 192.168.100.9' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:13.964 192.168.100.9' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3564649 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3564649 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3564649 ']' 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:13.964 12:55:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.964 [2024-05-15 12:55:51.785006] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:12:13.964 [2024-05-15 12:55:51.785084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.964 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.223 [2024-05-15 12:55:51.860265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.223 [2024-05-15 12:55:51.952294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.223 [2024-05-15 12:55:51.952338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.223 [2024-05-15 12:55:51.952347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.223 [2024-05-15 12:55:51.952356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.223 [2024-05-15 12:55:51.952363] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
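With networking resolved, the log next starts the target and configures the RDMA transport. A rough sketch of that bring-up, assuming the workspace layout of this job and substituting a simple rpc_get_methods poll for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumption: job workspace
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the target answers JSON-RPC on its default UNIX socket
    # (stand-in for waitforlisten; -t bounds each attempt in seconds).
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done

    # Same RDMA transport options assembled in NVMF_TRANSPORT_OPTS above.
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192

The reactor startup notices and the create_ib_device messages recorded next are the target acknowledging exactly this sequence.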
00:12:14.223 [2024-05-15 12:55:51.952454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.223 [2024-05-15 12:55:51.952556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.223 [2024-05-15 12:55:51.952640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.223 [2024-05-15 12:55:51.952641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.791 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 [2024-05-15 12:55:52.681642] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe2b0f0/0xe2f5e0) succeed. 00:12:15.051 [2024-05-15 12:55:52.692002] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe2c730/0xe70c70) succeed. 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 Malloc0 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 Malloc1 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 [2024-05-15 12:55:52.903221] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:15.051 [2024-05-15 12:55:52.903573] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.051 12:55:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:15.311 00:12:15.311 Discovery Log Number of Records 2, Generation counter 2 00:12:15.311 =====Discovery Log Entry 0====== 00:12:15.311 trtype: rdma 00:12:15.311 adrfam: ipv4 00:12:15.311 subtype: current discovery subsystem 00:12:15.311 treq: not required 00:12:15.311 portid: 0 00:12:15.311 trsvcid: 4420 00:12:15.311 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.311 traddr: 192.168.100.8 00:12:15.311 eflags: explicit discovery connections, duplicate discovery information 00:12:15.311 rdma_prtype: not specified 00:12:15.311 rdma_qptype: connected 00:12:15.311 rdma_cms: rdma-cm 00:12:15.311 rdma_pkey: 0x0000 00:12:15.311 =====Discovery Log Entry 1====== 00:12:15.311 trtype: rdma 00:12:15.311 adrfam: ipv4 00:12:15.311 subtype: nvme subsystem 00:12:15.311 treq: not required 00:12:15.311 portid: 0 00:12:15.311 trsvcid: 4420 00:12:15.311 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.311 traddr: 192.168.100.8 00:12:15.311 eflags: none 00:12:15.311 rdma_prtype: not specified 00:12:15.311 rdma_qptype: connected 00:12:15.311 rdma_cms: rdma-cm 00:12:15.311 rdma_pkey: 0x0000 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* 
]] 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:15.311 12:55:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:16.248 12:55:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:12:18.200 12:55:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:18.200 12:55:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:18.200 12:55:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:18.200 /dev/nvme0n1 ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- 
# get_nvme_devs 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:18.200 12:55:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.201 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:18.201 12:55:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.579 12:55:57 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:19.579 rmmod nvme_rdma 00:12:19.579 rmmod nvme_fabrics 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3564649 ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3564649 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3564649 ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3564649 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3564649 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3564649' 00:12:19.579 killing process with pid 3564649 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3564649 00:12:19.579 [2024-05-15 12:55:57.212349] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:19.579 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3564649 00:12:19.579 [2024-05-15 12:55:57.297598] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:19.838 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.838 12:55:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:19.838 00:12:19.838 real 0m11.593s 00:12:19.838 user 0m23.698s 00:12:19.838 sys 0m4.965s 00:12:19.838 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.838 12:55:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.838 ************************************ 00:12:19.838 END TEST nvmf_nvme_cli 00:12:19.838 ************************************ 00:12:19.838 12:55:57 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:19.838 12:55:57 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:19.838 12:55:57 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.838 12:55:57 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.838 12:55:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:19.838 ************************************ 00:12:19.838 START TEST nvmf_host_management 00:12:19.838 ************************************ 00:12:19.838 12:55:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:20.098 * Looking for test storage... 
00:12:20.098 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.098 12:55:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.376 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:25.377 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:25.377 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:25.377 Found net devices under 0000:18:00.0: mlx_0_0 00:12:25.377 
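The device discovery above is plain sysfs walking: nvmf/common.sh matches each Mellanox PCI function against its mlx5 device-ID table, then lists the network interfaces registered under that PCI address. A minimal standalone sketch of the same lookup (the two PCI addresses are the ones reported in this log; everything else is illustrative):

    #!/usr/bin/env bash
    # Resolve the netdev name(s) behind each mlx5 PCI function, as nvmf/common.sh does.
    for pci in 0000:18:00.0 0000:18:00.1; do
        # sysfs lists every net interface bound to the function under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this node the loop reports mlx_0_0 and mlx_0_1, the two ports of the ConnectX adapter (0x15b3 - 0x1015) seen in the log.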
12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:25.377 Found net devices under 0000:18:00.1: mlx_0_1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:25.377 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:25.377 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:25.377 altname enp24s0f0np0 00:12:25.377 altname ens785f0np0 00:12:25.377 inet 192.168.100.8/24 scope global mlx_0_0 00:12:25.377 valid_lft forever preferred_lft forever 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:25.377 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:25.377 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:25.377 altname enp24s0f1np1 00:12:25.377 altname ens785f1np1 00:12:25.377 inet 192.168.100.9/24 scope global mlx_0_1 00:12:25.377 valid_lft forever preferred_lft forever 
00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:25.377 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.637 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:25.638 192.168.100.9' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:25.638 192.168.100.9' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:25.638 192.168.100.9' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3568283 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3568283 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3568283 ']' 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.638 12:56:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.638 [2024-05-15 12:56:03.373446] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
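nvmfappstart launches the target with an explicit shared-memory ID, tracepoint mask, and core mask. A hedged restatement of the launch as assembled in this log (waitforlisten is the common.sh helper that polls the RPC socket):

    # -i 0: shm id; -e 0xFFFF: enable all tracepoint groups; -m 0x1E: run on cores 1-4
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs.

The mask 0x1E (binary 11110) is why the reactors below report starting on cores 1, 2, 3, and 4.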
00:12:25.638 [2024-05-15 12:56:03.373507] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.638 [2024-05-15 12:56:03.443819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.895 [2024-05-15 12:56:03.524846] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.896 [2024-05-15 12:56:03.524889] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.896 [2024-05-15 12:56:03.524899] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.896 [2024-05-15 12:56:03.524923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.896 [2024-05-15 12:56:03.524930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.896 [2024-05-15 12:56:03.525027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.896 [2024-05-15 12:56:03.525106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.896 [2024-05-15 12:56:03.525208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.896 [2024-05-15 12:56:03.525209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.462 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.462 [2024-05-15 12:56:04.278178] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf9d3e0/0xfa18d0) succeed. 00:12:26.462 [2024-05-15 12:56:04.288584] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf9ea20/0xfe2f60) succeed. 
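With the RDMA transport live on both mlx5 devices, the remaining bring-up is a short RPC sequence (host_management.sh batches it through rpcs.txt just below). A hedged sketch of equivalent standalone rpc.py calls, with flags and names copied from the rpc_cmd invocations and variables in this log and the in-tree script path assumed:

    # Assumes nvmf_tgt is already running on the default /var/tmp/spdk.sock.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The listener registration is what produces the 'NVMe/RDMA Target Listening on 192.168.100.8 port 4420' notice a few lines down, and the explicit add_host (rather than allow-any-host) is what lets the test revoke access later.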
00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.721 Malloc0 00:12:26.721 [2024-05-15 12:56:04.478545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:26.721 [2024-05-15 12:56:04.478886] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3568413 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3568413 /var/tmp/bdevperf.sock 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3568413 ']' 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
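bdevperf then attaches to that subsystem as an NVMe-oF initiator, driven entirely by a JSON config fed in through process substitution. The invocation from the log, restated with the knobs annotated (gen_nvmf_target_json is the test helper that emits the bdev config printed below):

    # -r: private RPC socket, kept apart from the target's /var/tmp/spdk.sock
    # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: write, then read back and compare; -t 10: seconds
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10

The /dev/fd/63 seen in the command line above is simply what bash's process substitution expands to.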
00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:26.721 { 00:12:26.721 "params": { 00:12:26.721 "name": "Nvme$subsystem", 00:12:26.721 "trtype": "$TEST_TRANSPORT", 00:12:26.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:26.721 "adrfam": "ipv4", 00:12:26.721 "trsvcid": "$NVMF_PORT", 00:12:26.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:26.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:26.721 "hdgst": ${hdgst:-false}, 00:12:26.721 "ddgst": ${ddgst:-false} 00:12:26.721 }, 00:12:26.721 "method": "bdev_nvme_attach_controller" 00:12:26.721 } 00:12:26.721 EOF 00:12:26.721 )") 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:26.721 12:56:04 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:26.721 "params": { 00:12:26.721 "name": "Nvme0", 00:12:26.721 "trtype": "rdma", 00:12:26.721 "traddr": "192.168.100.8", 00:12:26.721 "adrfam": "ipv4", 00:12:26.721 "trsvcid": "4420", 00:12:26.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:26.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:26.721 "hdgst": false, 00:12:26.721 "ddgst": false 00:12:26.721 }, 00:12:26.721 "method": "bdev_nvme_attach_controller" 00:12:26.721 }' 00:12:26.721 [2024-05-15 12:56:04.582986] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:12:26.721 [2024-05-15 12:56:04.583047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568413 ] 00:12:26.980 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.980 [2024-05-15 12:56:04.659154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.980 [2024-05-15 12:56:04.742568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.239 Running I/O for 10 seconds... 
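The 'Running I/O for 10 seconds' banner means the verify job is live; the test now polls bdevperf's own RPC socket until the Nvme0n1 bdev has served enough reads to prove the data path works. A minimal sketch of that poll (retry delay assumed; counter names and the threshold match the waitforio logic visible below):

    # Succeed once Nvme0n1 reports >= 100 completed reads, trying up to 10 times.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 1
    done

In this run the very first sample already reports 1580 reads, so the loop exits immediately.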
00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1580 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1580 -ge 100 ']' 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:27.806 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.807 12:56:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:28.746 [2024-05-15 12:56:06.509868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.509904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.509924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.509934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.509946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.509956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.509967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.509977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.509988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.509998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.510010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.510019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.510030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.510039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.510050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:12:28.746 [2024-05-15 12:56:06.510064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.746 [2024-05-15 12:56:06.510075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 
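The ABORTED - SQ DELETION stream here, continuing below, is the point of the test: nvmf_subsystem_remove_host just revoked the initiator's access while the verify job was mid-flight, so every outstanding WRITE on the torn-down submission queues completes with that status (generic command status 08h, Command Aborted due to SQ Deletion). When triaging a log like this, a quick tally usually confirms it is the expected flood rather than scattered one-off failures (log filename illustrative):

    grep -c 'ABORTED - SQ DELETION' nvmf-phy-autotest.log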
00:12:28.746 [2024-05-15 12:56:06.510093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:12:28.747 [2024-05-15 12:56:06.510113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:12:28.747 [2024-05-15 12:56:06.510134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:12:28.747 [2024-05-15 12:56:06.510154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:12:28.747 [2024-05-15 12:56:06.510175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:12:28.747 [2024-05-15 12:56:06.510195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:12:28.747 [2024-05-15 12:56:06.510483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 
12:56:06.510664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:12:28.747 [2024-05-15 12:56:06.510754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182400 00:12:28.747 [2024-05-15 12:56:06.510774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x182400 00:12:28.747 [2024-05-15 12:56:06.510796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:12:28.747 [2024-05-15 12:56:06.510816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:12:28.747 [2024-05-15 12:56:06.510836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:12:28.747 [2024-05-15 12:56:06.510856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.747 [2024-05-15 12:56:06.510867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df2f000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df0e000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.510990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000deed000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.510999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000decc000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000deab000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de8a000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de69000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de06000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dde5000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ddc4000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dda3000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd82000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd61000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:12:28.748 [2024-05-15 12:56:06.511219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd40000 len:0x10000 key:0x182400 00:12:28.748 [2024-05-15 12:56:06.511229] 
00:12:28.748 [2024-05-15 12:56:06.513140] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller.
00:12:28.748 [2024-05-15 12:56:06.514060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:28.748 task offset: 90112 on job bdev=Nvme0n1 fails
00:12:28.748
00:12:28.748 Latency(us)
00:12:28.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:28.748 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:28.748 Job: Nvme0n1 ended in about 1.59 seconds with error
00:12:28.748 Verification LBA range: start 0x0 length 0x400
00:12:28.748 Nvme0n1 : 1.59 1072.22 67.01 40.25 0.00 56986.73 2208.28 1021221.84
00:12:28.748 ===================================================================================================================
00:12:28.748 Total : 1072.22 67.01 40.25 0.00 56986.73 2208.28 1021221.84
00:12:28.748 [2024-05-15 12:56:06.515781] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:28.748 12:56:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3568413
12:56:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
12:56:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
12:56:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:28.748 {
00:12:28.748 "params": {
00:12:28.748 "name": "Nvme$subsystem",
00:12:28.748 "trtype": "$TEST_TRANSPORT",
00:12:28.748 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:28.748 "adrfam": "ipv4",
00:12:28.748 "trsvcid": "$NVMF_PORT",
00:12:28.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:28.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:28.748 "hdgst": ${hdgst:-false},
00:12:28.748 "ddgst": ${ddgst:-false}
00:12:28.748 },
00:12:28.748 "method": "bdev_nvme_attach_controller"
00:12:28.748 }
00:12:28.748 EOF
00:12:28.748 )")
00:12:28.748 12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:12:28.748 12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:12:28.748 12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:12:28.748 12:56:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:28.748 "params": {
00:12:28.748 "name": "Nvme0",
00:12:28.748 "trtype": "rdma",
00:12:28.748 "traddr": "192.168.100.8",
00:12:28.748 "adrfam": "ipv4",
00:12:28.748 "trsvcid": "4420",
00:12:28.748 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:28.748 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:12:28.748 "hdgst": false,
00:12:28.748 "ddgst": false
00:12:28.748 },
00:12:28.748 "method": "bdev_nvme_attach_controller"
00:12:28.748 }'
00:12:28.748 [2024-05-15 12:56:06.576103] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:12:28.748 [2024-05-15 12:56:06.576159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568743 ]
00:12:28.748 EAL: No free 2048 kB hugepages reported on node 1
00:12:29.008 [2024-05-15 12:56:06.647825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.008 [2024-05-15 12:56:06.730470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.267 Running I/O for 1 seconds...
00:12:30.205
00:12:30.205 Latency(us)
00:12:30.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:30.205 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:30.205 Verification LBA range: start 0x0 length 0x400
00:12:30.205 Nvme0n1 : 1.01 3026.30 189.14 0.00 0.00 20711.68 901.12 43538.70
00:12:30.205 ===================================================================================================================
00:12:30.205 Total : 3026.30 189.14 0.00 0.00 20711.68 901.12 43538.70
00:12:30.464 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3568413 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:30.464 rmmod nvme_rdma
00:12:30.464 rmmod nvme_fabrics
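The relaunch traced above hands bdevperf its bdev configuration through a process substitution (the --json /dev/fd/62 in the trace is simply where bash placed that file descriptor), so no config file ever touches disk. A minimal sketch of the pattern, with one assumption spelled out: only the attach-controller entry is visible in the trace, so the surrounding "subsystems"/"bdev" wrapper that nvmf/common.sh builds around it is reconstructed here, not copied from the log.

# Sketch: emit a bdevperf-consumable JSON config and feed it via process substitution.
# The params block mirrors the values printed above; the wrapper layout is assumed.
gen_nvmf_target_json() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
}
# <(...) becomes a /dev/fd/NN path, which bdevperf reads like any file:
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 64 -o 65536 -w verify -t 1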
12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3568283 ']' 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3568283 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3568283 ']' 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3568283 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3568283 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3568283' 00:12:30.464 killing process with pid 3568283 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3568283 00:12:30.464 [2024-05-15 12:56:08.256312] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:30.464 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3568283 00:12:30.464 [2024-05-15 12:56:08.338312] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:30.723 [2024-05-15 12:56:08.562348] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:30.723 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.723 12:56:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:30.723 12:56:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:30.723 00:12:30.723 real 0m10.917s 00:12:30.723 user 0m24.962s 00:12:30.723 sys 0m5.369s 00:12:30.723 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:30.723 12:56:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.723 ************************************ 00:12:30.723 END TEST nvmf_host_management 00:12:30.723 ************************************ 00:12:30.983 12:56:08 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:30.983 12:56:08 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:30.983 12:56:08 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:30.983 12:56:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:30.983 ************************************ 00:12:30.983 START TEST nvmf_lvol 00:12:30.983 ************************************ 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:30.983 * Looking for test storage... 00:12:30.983 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=... [paths/export.sh@3-@6 condensed: the sourced export script repeatedly re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the standard system PATH, then exports PATH and echoes the final value ending in /usr/local/bin:...:/var/lib/snapd/snap/bin]
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
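The NVME_HOSTNQN/NVME_HOST plumbing sourced from nvmf/common.sh a moment earlier boils down to a few lines. A rough sketch, with one assumption flagged: deriving the host ID by stripping the NQN prefix is a plausible reading of the traced values (NVME_HOSTID matches the uuid part of the generated NQN), not a verbatim copy of the helper.

# nvme gen-hostnqn (nvme-cli) prints a random NQN of the form
# nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}   # assumed derivation: keep only the uuid (it contains no ':')
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later consumed by the connect helper; for rdma the trace sets NVME_CONNECT='nvme connect -i 15':
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"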
nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.983 12:56:08 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:37.563 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:37.563 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:37.563 Found net devices under 0000:18:00.0: mlx_0_0 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.563 12:56:14 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:37.563 Found net devices under 0000:18:00.1: mlx_0_1 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.563 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # 
continue 2 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:37.564 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.564 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:37.564 altname enp24s0f0np0 00:12:37.564 altname ens785f0np0 00:12:37.564 inet 192.168.100.8/24 scope global mlx_0_0 00:12:37.564 valid_lft forever preferred_lft forever 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:37.564 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.564 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:37.564 altname enp24s0f1np1 00:12:37.564 altname ens785f1np1 00:12:37.564 inet 192.168.100.9/24 scope global mlx_0_1 00:12:37.564 valid_lft forever preferred_lft forever 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:37.564 192.168.100.9' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:37.564 192.168.100.9' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:37.564 192.168.100.9' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3571827 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3571827 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3571827 ']' 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.564 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:37.565 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.565 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:37.565 12:56:14 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.565 [2024-05-15 12:56:14.861567] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:12:37.565 [2024-05-15 12:56:14.861625] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.565 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.565 [2024-05-15 12:56:14.933662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.565 [2024-05-15 12:56:15.025921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.565 [2024-05-15 12:56:15.025963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.565 [2024-05-15 12:56:15.025973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.565 [2024-05-15 12:56:15.025997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.565 [2024-05-15 12:56:15.026004] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
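The waitforlisten call traced above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A bare-bones equivalent of that start-and-wait pattern, using the socket path and binary locations from this workspace; the polling loop is a simplification of the real helper, and rpc_get_methods is used here only as a cheap liveness probe.

# Launch the target with the same flags as the traced run, then poll its RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    # bail out if the target died during startup instead of spinning forever
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done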
00:12:37.565 [2024-05-15 12:56:15.026053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.565 [2024-05-15 12:56:15.026150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.565 [2024-05-15 12:56:15.026153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.825 12:56:15 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.825 12:56:15 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:12:37.825 12:56:15 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.825 12:56:15 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.825 12:56:15 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.085 12:56:15 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.085 12:56:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:38.085 [2024-05-15 12:56:15.900489] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d54580/0x1d58a70) succeed. 00:12:38.085 [2024-05-15 12:56:15.910985] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d55b20/0x1d9a100) succeed. 00:12:38.344 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:38.603 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:38.603 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:38.603 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:38.603 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:38.862 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:39.122 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=85eb56a3-ff69-4294-812e-f5a8efc7d663 00:12:39.122 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85eb56a3-ff69-4294-812e-f5a8efc7d663 lvol 20 00:12:39.122 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ac9c1cab-5866-4b84-b7f5-831ef04d951a 00:12:39.122 12:56:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:39.380 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac9c1cab-5866-4b84-b7f5-831ef04d951a 00:12:39.640 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:39.640 [2024-05-15 12:56:17.520614] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in 
v24.09 00:12:39.640 [2024-05-15 12:56:17.520961] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:39.899 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:39.899 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3572229 00:12:39.899 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:39.899 12:56:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:39.899 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.278 12:56:18 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ac9c1cab-5866-4b84-b7f5-831ef04d951a MY_SNAPSHOT 00:12:41.278 12:56:18 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=69956583-cdaa-489c-9965-3256a95dbe7a 00:12:41.278 12:56:18 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ac9c1cab-5866-4b84-b7f5-831ef04d951a 30 00:12:41.278 12:56:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 69956583-cdaa-489c-9965-3256a95dbe7a MY_CLONE 00:12:41.538 12:56:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c6e8a057-52c0-454e-8ada-8cc8c7b3e8f8 00:12:41.538 12:56:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c6e8a057-52c0-454e-8ada-8cc8c7b3e8f8 00:12:41.797 12:56:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3572229 00:12:51.784 Initializing NVMe Controllers 00:12:51.784 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:51.784 Controller IO queue size 128, less than required. 00:12:51.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:51.784 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:51.784 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:51.784 Initialization complete. Launching workers. 
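Stripped of the xtrace noise, the nvmf_lvol flow logged above is: build a raid0 from two malloc bdevs, carve an lvstore and a 20M lvol out of it, export the lvol over RDMA, then snapshot, resize, clone and inflate it while spdk_nvme_perf runs random writes against it. A condensed sketch, with every rpc.py subcommand and argument taken from this run; the shell variable names (rpc, lvs, lvol, snap, clone, perf_pid) are illustrative, and error handling is omitted:

    rpc=$SPDK/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                    # Malloc0
    $rpc bdev_malloc_create 64 512                    # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # prints the lvol UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $SPDK/build/bin/spdk_nvme_perf \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1
    # Mutate the lvol while perf is writing to it over the fabric.
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait $perf_pid

The summary table that follows is internally consistent: 16030.40 + 15902.50 = 31932.90 IOPS across the two cores, which at 4 KiB per I/O is 31932.90 / 256 = 124.74 MiB/s, and the 8019.09 us overall average is roughly the IOPS-weighted mean of the two per-core averages.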
00:12:51.784 ========================================================
00:12:51.784 Latency(us)
00:12:51.784 Device Information : IOPS MiB/s Average min max
00:12:51.784 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16030.40 62.62 7987.34 2114.49 47798.53
00:12:51.784 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15902.50 62.12 8051.11 1452.90 49996.65
00:12:51.784 ========================================================
00:12:51.784 Total : 31932.90 124.74 8019.09 1452.90 49996.65
00:12:51.784
00:12:51.784 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:51.784 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac9c1cab-5866-4b84-b7f5-831ef04d951a
00:12:51.784 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85eb56a3-ff69-4294-812e-f5a8efc7d663
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:52.043 rmmod nvme_rdma
00:12:52.043 rmmod nvme_fabrics
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3571827 ']'
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3571827
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3571827 ']'
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3571827
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # uname
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:52.043 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3571827
00:12:52.044 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:52.044 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:52.044 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3571827'
00:12:52.044 killing process with pid 3571827
00:12:52.044 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3571827
00:12:52.044 [2024-05-15 12:56:29.817160] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:52.044 12:56:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3571827 00:12:52.044 [2024-05-15 12:56:29.887340] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:52.302 12:56:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.303 12:56:30 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:52.303 00:12:52.303 real 0m21.469s 00:12:52.303 user 1m11.424s 00:12:52.303 sys 0m5.907s 00:12:52.303 12:56:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.303 12:56:30 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:52.303 ************************************ 00:12:52.303 END TEST nvmf_lvol 00:12:52.303 ************************************ 00:12:52.303 12:56:30 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:52.303 12:56:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.562 12:56:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.562 12:56:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:52.562 ************************************ 00:12:52.562 START TEST nvmf_lvs_grow 00:12:52.562 ************************************ 00:12:52.562 12:56:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:52.562 * Looking for test storage... 00:12:52.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.563 
12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.563 12:56:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.279 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.280 
12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:59.280 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:59.280 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.280 12:56:36 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:59.280 Found net devices under 0000:18:00.0: mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:59.280 Found net devices under 0000:18:00.1: mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:59.280 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.280 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:59.280 altname enp24s0f0np0 00:12:59.280 altname ens785f0np0 00:12:59.280 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.280 valid_lft forever preferred_lft forever 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:59.280 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.280 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:59.280 altname enp24s0f1np1 00:12:59.280 altname ens785f1np1 00:12:59.280 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.280 valid_lft forever preferred_lft forever 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.280 12:56:36 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.280 192.168.100.9' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:59.280 192.168.100.9' 00:12:59.280 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:59.281 192.168.100.9' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3576597 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3576597 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3576597 ']' 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.281 12:56:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.281 [2024-05-15 12:56:36.431183] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
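At this point nvmf/common.sh has mapped both mlx5 ports to their addresses. The interface probing it just stepped through reduces to ip/awk/cut glue; written out as the helper the log shows (helper and variable names as logged in nvmf/common.sh):

    get_ip_address() {
        # Field 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; drop the prefix length.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this run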
00:12:59.281 [2024-05-15 12:56:36.431245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.281 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.281 [2024-05-15 12:56:36.505799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.281 [2024-05-15 12:56:36.593336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.281 [2024-05-15 12:56:36.593378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.281 [2024-05-15 12:56:36.593387] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.281 [2024-05-15 12:56:36.593411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.281 [2024-05-15 12:56:36.593419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.281 [2024-05-15 12:56:36.593445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.539 12:56:37 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:59.797 [2024-05-15 12:56:37.463922] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e45000/0x1e494f0) succeed. 00:12:59.797 [2024-05-15 12:56:37.473196] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e46500/0x1e8ab80) succeed. 
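The lvs_grow_clean case that starts here exercises bdev_lvol_grow_lvstore end to end: create an lvstore on a 200M AIO file, double the backing file, rescan the AIO bdev, and grow the store. A condensed sketch with sizes and names taken from this run, reusing the $rpc and $SPDK variables assumed in the sketches above:

    aio=$SPDK/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio"                 # backing file: 51200 -> 102400 blocks
    $rpc bdev_aio_rescan aio_bdev           # AIO bdev picks up the new size
    $rpc bdev_lvol_grow_lvstore -u "$lvs"   # total_data_clusters: 49 -> 99

With 4M clusters, one cluster's worth of the file goes to lvstore metadata, which is presumably why the counts come out as 49 and 99 rather than 50 and 100.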
00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.797 ************************************ 00:12:59.797 START TEST lvs_grow_clean 00:12:59.797 ************************************ 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.797 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:00.055 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:00.055 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:00.314 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:00.314 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:00.314 12:56:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:00.314 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:00.314 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:00.314 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 lvol 150 00:13:00.574 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf013ec8-3dd3-4763-9917-27f268808af0 00:13:00.574 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:00.574 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:00.574 [2024-05-15 12:56:38.455657] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:00.574 [2024-05-15 12:56:38.455734] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:00.834 true 00:13:00.834 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:00.834 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:00.834 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:00.834 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:01.093 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf013ec8-3dd3-4763-9917-27f268808af0 00:13:01.352 12:56:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:01.352 [2024-05-15 12:56:39.133550] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:01.352 [2024-05-15 12:56:39.133947] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.352 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3577012 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3577012 /var/tmp/bdevperf.sock 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3577012 ']' 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:01.611 12:56:39 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:01.611 12:56:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:01.611 [2024-05-15 12:56:39.360694] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:13:01.611 [2024-05-15 12:56:39.360758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577012 ] 00:13:01.611 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.611 [2024-05-15 12:56:39.433285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.870 [2024-05-15 12:56:39.524255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.439 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.439 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:13:02.439 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:02.698 Nvme0n1 00:13:02.698 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:02.957 [ 00:13:02.957 { 00:13:02.957 "name": "Nvme0n1", 00:13:02.957 "aliases": [ 00:13:02.957 "bf013ec8-3dd3-4763-9917-27f268808af0" 00:13:02.957 ], 00:13:02.957 "product_name": "NVMe disk", 00:13:02.957 "block_size": 4096, 00:13:02.957 "num_blocks": 38912, 00:13:02.957 "uuid": "bf013ec8-3dd3-4763-9917-27f268808af0", 00:13:02.957 "assigned_rate_limits": { 00:13:02.957 "rw_ios_per_sec": 0, 00:13:02.957 "rw_mbytes_per_sec": 0, 00:13:02.957 "r_mbytes_per_sec": 0, 00:13:02.957 "w_mbytes_per_sec": 0 00:13:02.957 }, 00:13:02.957 "claimed": false, 00:13:02.957 "zoned": false, 00:13:02.957 "supported_io_types": { 00:13:02.957 "read": true, 00:13:02.957 "write": true, 00:13:02.957 "unmap": true, 00:13:02.957 "write_zeroes": true, 00:13:02.957 "flush": true, 00:13:02.957 "reset": true, 00:13:02.957 "compare": true, 00:13:02.957 "compare_and_write": true, 00:13:02.957 "abort": true, 00:13:02.957 "nvme_admin": true, 00:13:02.957 "nvme_io": true 00:13:02.957 }, 00:13:02.957 "memory_domains": [ 00:13:02.957 { 00:13:02.957 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:02.957 "dma_device_type": 0 00:13:02.957 } 00:13:02.957 ], 00:13:02.957 "driver_specific": { 00:13:02.957 "nvme": [ 00:13:02.957 { 00:13:02.957 "trid": { 00:13:02.957 "trtype": "RDMA", 00:13:02.957 "adrfam": "IPv4", 00:13:02.957 "traddr": "192.168.100.8", 00:13:02.957 "trsvcid": "4420", 00:13:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:02.957 }, 00:13:02.957 "ctrlr_data": { 00:13:02.957 "cntlid": 1, 00:13:02.957 "vendor_id": "0x8086", 00:13:02.957 "model_number": "SPDK bdev Controller", 00:13:02.957 "serial_number": "SPDK0", 00:13:02.957 
"firmware_revision": "24.05", 00:13:02.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:02.957 "oacs": { 00:13:02.957 "security": 0, 00:13:02.957 "format": 0, 00:13:02.957 "firmware": 0, 00:13:02.957 "ns_manage": 0 00:13:02.957 }, 00:13:02.957 "multi_ctrlr": true, 00:13:02.957 "ana_reporting": false 00:13:02.957 }, 00:13:02.957 "vs": { 00:13:02.957 "nvme_version": "1.3" 00:13:02.957 }, 00:13:02.957 "ns_data": { 00:13:02.957 "id": 1, 00:13:02.957 "can_share": true 00:13:02.957 } 00:13:02.957 } 00:13:02.957 ], 00:13:02.957 "mp_policy": "active_passive" 00:13:02.957 } 00:13:02.957 } 00:13:02.957 ] 00:13:02.957 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3577200 00:13:02.957 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:02.957 12:56:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:02.957 Running I/O for 10 seconds... 00:13:03.893 Latency(us) 00:13:03.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.894 Nvme0n1 : 1.00 34145.00 133.38 0.00 0.00 0.00 0.00 0.00 00:13:03.894 =================================================================================================================== 00:13:03.894 Total : 34145.00 133.38 0.00 0.00 0.00 0.00 0.00 00:13:03.894 00:13:04.830 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:04.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.831 Nvme0n1 : 2.00 34482.00 134.70 0.00 0.00 0.00 0.00 0.00 00:13:04.831 =================================================================================================================== 00:13:04.831 Total : 34482.00 134.70 0.00 0.00 0.00 0.00 0.00 00:13:04.831 00:13:05.090 true 00:13:05.090 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:05.090 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:05.350 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:05.350 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:05.350 12:56:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3577200 00:13:05.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.916 Nvme0n1 : 3.00 34550.00 134.96 0.00 0.00 0.00 0.00 0.00 00:13:05.916 =================================================================================================================== 00:13:05.916 Total : 34550.00 134.96 0.00 0.00 0.00 0.00 0.00 00:13:05.916 00:13:06.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.854 Nvme0n1 : 4.00 34639.25 135.31 0.00 0.00 0.00 0.00 0.00 00:13:06.854 =================================================================================================================== 00:13:06.854 Total : 34639.25 135.31 0.00 0.00 0.00 
0.00 0.00 00:13:06.854 00:13:08.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.235 Nvme0n1 : 5.00 34714.80 135.60 0.00 0.00 0.00 0.00 0.00 00:13:08.235 =================================================================================================================== 00:13:08.235 Total : 34714.80 135.60 0.00 0.00 0.00 0.00 0.00 00:13:08.235 00:13:09.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.171 Nvme0n1 : 6.00 34753.17 135.75 0.00 0.00 0.00 0.00 0.00 00:13:09.171 =================================================================================================================== 00:13:09.171 Total : 34753.17 135.75 0.00 0.00 0.00 0.00 0.00 00:13:09.171 00:13:10.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.108 Nvme0n1 : 7.00 34774.43 135.84 0.00 0.00 0.00 0.00 0.00 00:13:10.108 =================================================================================================================== 00:13:10.108 Total : 34774.43 135.84 0.00 0.00 0.00 0.00 0.00 00:13:10.108 00:13:11.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.047 Nvme0n1 : 8.00 34784.88 135.88 0.00 0.00 0.00 0.00 0.00 00:13:11.047 =================================================================================================================== 00:13:11.047 Total : 34784.88 135.88 0.00 0.00 0.00 0.00 0.00 00:13:11.047 00:13:11.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.985 Nvme0n1 : 9.00 34812.11 135.98 0.00 0.00 0.00 0.00 0.00 00:13:11.985 =================================================================================================================== 00:13:11.985 Total : 34812.11 135.98 0.00 0.00 0.00 0.00 0.00 00:13:11.985 00:13:12.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.922 Nvme0n1 : 10.00 34780.50 135.86 0.00 0.00 0.00 0.00 0.00 00:13:12.922 =================================================================================================================== 00:13:12.922 Total : 34780.50 135.86 0.00 0.00 0.00 0.00 0.00 00:13:12.922 00:13:12.922 00:13:12.922 Latency(us) 00:13:12.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.922 Nvme0n1 : 10.00 34780.33 135.86 0.00 0.00 3677.45 2521.71 10086.85 00:13:12.922 =================================================================================================================== 00:13:12.922 Total : 34780.33 135.86 0.00 0.00 3677.45 2521.71 10086.85 00:13:12.922 0 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3577012 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3577012 ']' 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3577012 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3577012 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:12.922 
12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3577012' 00:13:12.922 killing process with pid 3577012 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3577012 00:13:12.922 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.922 00:13:12.922 Latency(us) 00:13:12.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.922 =================================================================================================================== 00:13:12.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.922 12:56:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3577012 00:13:13.181 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:13.441 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:13.701 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:13.701 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:13.701 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:13.701 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:13.701 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:13.960 [2024-05-15 12:56:51.708016] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:13.960 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:14.219 request: 00:13:14.219 { 00:13:14.219 "uuid": "93b2d38b-28f2-471e-9e8e-196cd381cab6", 00:13:14.219 "method": "bdev_lvol_get_lvstores", 00:13:14.219 "req_id": 1 00:13:14.219 } 00:13:14.219 Got JSON-RPC error response 00:13:14.219 response: 00:13:14.219 { 00:13:14.219 "code": -19, 00:13:14.219 "message": "No such device" 00:13:14.219 } 00:13:14.219 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:14.219 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:14.219 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:14.219 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:14.220 12:56:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:14.220 aio_bdev 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bf013ec8-3dd3-4763-9917-27f268808af0 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=bf013ec8-3dd3-4763-9917-27f268808af0 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:14.479 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bf013ec8-3dd3-4763-9917-27f268808af0 -t 2000 00:13:14.738 [ 00:13:14.739 { 00:13:14.739 "name": "bf013ec8-3dd3-4763-9917-27f268808af0", 00:13:14.739 "aliases": [ 00:13:14.739 "lvs/lvol" 00:13:14.739 ], 00:13:14.739 "product_name": "Logical Volume", 00:13:14.739 "block_size": 4096, 00:13:14.739 "num_blocks": 38912, 00:13:14.739 "uuid": "bf013ec8-3dd3-4763-9917-27f268808af0", 00:13:14.739 "assigned_rate_limits": { 00:13:14.739 "rw_ios_per_sec": 0, 00:13:14.739 "rw_mbytes_per_sec": 0, 00:13:14.739 "r_mbytes_per_sec": 0, 00:13:14.739 "w_mbytes_per_sec": 0 00:13:14.739 }, 00:13:14.739 "claimed": false, 00:13:14.739 "zoned": 
false, 00:13:14.739 "supported_io_types": { 00:13:14.739 "read": true, 00:13:14.739 "write": true, 00:13:14.739 "unmap": true, 00:13:14.739 "write_zeroes": true, 00:13:14.739 "flush": false, 00:13:14.739 "reset": true, 00:13:14.739 "compare": false, 00:13:14.739 "compare_and_write": false, 00:13:14.739 "abort": false, 00:13:14.739 "nvme_admin": false, 00:13:14.739 "nvme_io": false 00:13:14.739 }, 00:13:14.739 "driver_specific": { 00:13:14.739 "lvol": { 00:13:14.739 "lvol_store_uuid": "93b2d38b-28f2-471e-9e8e-196cd381cab6", 00:13:14.739 "base_bdev": "aio_bdev", 00:13:14.739 "thin_provision": false, 00:13:14.739 "num_allocated_clusters": 38, 00:13:14.739 "snapshot": false, 00:13:14.739 "clone": false, 00:13:14.739 "esnap_clone": false 00:13:14.739 } 00:13:14.739 } 00:13:14.739 } 00:13:14.739 ] 00:13:14.739 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:13:14.739 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:14.739 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:14.997 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:14.997 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:14.997 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:14.997 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:14.997 12:56:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf013ec8-3dd3-4763-9917-27f268808af0 00:13:15.256 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93b2d38b-28f2-471e-9e8e-196cd381cab6 00:13:15.515 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:15.515 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:15.774 00:13:15.774 real 0m15.835s 00:13:15.774 user 0m15.700s 00:13:15.774 sys 0m1.275s 00:13:15.774 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.774 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:15.774 ************************************ 00:13:15.774 END TEST lvs_grow_clean 00:13:15.774 ************************************ 00:13:15.774 12:56:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:15.774 12:56:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:15.775 ************************************ 
00:13:15.775 START TEST lvs_grow_dirty 00:13:15.775 ************************************ 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:15.775 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:16.034 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:16.034 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:16.034 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3e0ff240-2812-40e4-8797-4c21a188adec 00:13:16.034 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:16.034 12:56:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:16.293 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:16.293 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:16.293 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3e0ff240-2812-40e4-8797-4c21a188adec lvol 150 00:13:16.552 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:16.552 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.552 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:16.552 [2024-05-15 12:56:54.396883] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:16.552 [2024-05-15 12:56:54.396962] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:16.552 true 00:13:16.552 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:16.552 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:16.812 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:16.812 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:17.072 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:17.072 12:56:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:17.331 [2024-05-15 12:56:55.067035] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:17.331 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:17.591 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3579247 00:13:17.591 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:17.591 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.591 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3579247 /var/tmp/bdevperf.sock 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3579247 ']' 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.592 12:56:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:17.592 [2024-05-15 12:56:55.288804] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
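Condensed, the dirty-run setup traced above is a short RPC sequence. The sketch below replays it by hand; the UUIDs, sizes, and the 192.168.100.8:4420 RDMA listener are the values from this run, paths are shortened to the repo root, and it assumes a running nvmf_tgt whose RDMA transport was already created (that step is not in this excerpt):

  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u 3e0ff240-2812-40e4-8797-4c21a188adec lvol 150
  truncate -s 400M test/nvmf/target/aio_bdev      # grow the backing file...
  scripts/rpc.py bdev_aio_rescan aio_bdev         # ...and let the aio bdev pick up the new size
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a23e87b7-acb0-46e5-b71a-40df739b63a4
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420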
00:13:17.592 [2024-05-15 12:56:55.288861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579247 ] 00:13:17.592 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.592 [2024-05-15 12:56:55.359709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.592 [2024-05-15 12:56:55.446039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.531 12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:18.531 12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:18.531 12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:18.531 Nvme0n1 00:13:18.531 12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:18.791 [ 00:13:18.791 { 00:13:18.791 "name": "Nvme0n1", 00:13:18.791 "aliases": [ 00:13:18.791 "a23e87b7-acb0-46e5-b71a-40df739b63a4" 00:13:18.791 ], 00:13:18.791 "product_name": "NVMe disk", 00:13:18.791 "block_size": 4096, 00:13:18.791 "num_blocks": 38912, 00:13:18.791 "uuid": "a23e87b7-acb0-46e5-b71a-40df739b63a4", 00:13:18.791 "assigned_rate_limits": { 00:13:18.791 "rw_ios_per_sec": 0, 00:13:18.791 "rw_mbytes_per_sec": 0, 00:13:18.791 "r_mbytes_per_sec": 0, 00:13:18.791 "w_mbytes_per_sec": 0 00:13:18.791 }, 00:13:18.791 "claimed": false, 00:13:18.791 "zoned": false, 00:13:18.791 "supported_io_types": { 00:13:18.791 "read": true, 00:13:18.791 "write": true, 00:13:18.791 "unmap": true, 00:13:18.791 "write_zeroes": true, 00:13:18.791 "flush": true, 00:13:18.791 "reset": true, 00:13:18.791 "compare": true, 00:13:18.791 "compare_and_write": true, 00:13:18.791 "abort": true, 00:13:18.791 "nvme_admin": true, 00:13:18.791 "nvme_io": true 00:13:18.791 }, 00:13:18.791 "memory_domains": [ 00:13:18.791 { 00:13:18.791 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:18.791 "dma_device_type": 0 00:13:18.791 } 00:13:18.791 ], 00:13:18.791 "driver_specific": { 00:13:18.791 "nvme": [ 00:13:18.791 { 00:13:18.791 "trid": { 00:13:18.791 "trtype": "RDMA", 00:13:18.791 "adrfam": "IPv4", 00:13:18.791 "traddr": "192.168.100.8", 00:13:18.791 "trsvcid": "4420", 00:13:18.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:18.791 }, 00:13:18.791 "ctrlr_data": { 00:13:18.791 "cntlid": 1, 00:13:18.791 "vendor_id": "0x8086", 00:13:18.791 "model_number": "SPDK bdev Controller", 00:13:18.791 "serial_number": "SPDK0", 00:13:18.791 "firmware_revision": "24.05", 00:13:18.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:18.791 "oacs": { 00:13:18.791 "security": 0, 00:13:18.791 "format": 0, 00:13:18.791 "firmware": 0, 00:13:18.791 "ns_manage": 0 00:13:18.791 }, 00:13:18.791 "multi_ctrlr": true, 00:13:18.791 "ana_reporting": false 00:13:18.791 }, 00:13:18.791 "vs": { 00:13:18.791 "nvme_version": "1.3" 00:13:18.791 }, 00:13:18.791 "ns_data": { 00:13:18.791 "id": 1, 00:13:18.791 "can_share": true 00:13:18.791 } 00:13:18.791 } 00:13:18.791 ], 00:13:18.791 "mp_policy": "active_passive" 00:13:18.791 } 00:13:18.791 } 00:13:18.791 ] 00:13:18.791 12:56:56 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3579431
12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:56:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:13:18.791 Running I/O for 10 seconds...
00:13:20.171 Latency(us)
00:13:20.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:20.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:20.171 Nvme0n1 : 1.00 34180.00 133.52 0.00 0.00 0.00 0.00 0.00
00:13:20.171 ===================================================================================================================
00:13:20.171 Total : 34180.00 133.52 0.00 0.00 0.00 0.00 0.00
00:13:20.171
00:13:20.801 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3e0ff240-2812-40e4-8797-4c21a188adec
00:13:20.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:20.801 Nvme0n1 : 2.00 34496.00 134.75 0.00 0.00 0.00 0.00 0.00
00:13:20.801 ===================================================================================================================
00:13:20.801 Total : 34496.00 134.75 0.00 0.00 0.00 0.00 0.00
00:13:20.801
00:13:21.061 true
00:13:21.061 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec
00:13:21.061 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:13:21.061 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:13:21.061 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:13:21.061 12:56:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3579431
00:13:22.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:22.000 Nvme0n1 : 3.00 34560.00 135.00 0.00 0.00 0.00 0.00 0.00
00:13:22.000 ===================================================================================================================
00:13:22.000 Total : 34560.00 135.00 0.00 0.00 0.00 0.00 0.00
00:13:22.000
00:13:22.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:22.938 Nvme0n1 : 4.00 34480.25 134.69 0.00 0.00 0.00 0.00 0.00
00:13:22.938 ===================================================================================================================
00:13:22.938 Total : 34480.25 134.69 0.00 0.00 0.00 0.00 0.00
00:13:22.938
00:13:23.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:23.877 Nvme0n1 : 5.00 34541.40 134.93 0.00 0.00 0.00 0.00 0.00
00:13:23.877 ===================================================================================================================
00:13:23.877 Total : 34541.40 134.93 0.00 0.00 0.00 0.00 0.00
00:13:23.877
00:13:24.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:24.813 Nvme0n1 : 6.00 34603.17 135.17 0.00 0.00 0.00 0.00 0.00
00:13:24.813 ===================================================================================================================
00:13:24.813 Total : 34603.17 135.17 0.00 0.00 0.00 0.00 0.00
00:13:24.813
00:13:26.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:26.190 Nvme0n1 : 7.00 34647.57 135.34 0.00 0.00 0.00 0.00 0.00
00:13:26.190 ===================================================================================================================
00:13:26.190 Total : 34647.57 135.34 0.00 0.00 0.00 0.00 0.00
00:13:26.190
00:13:27.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:27.127 Nvme0n1 : 8.00 34592.75 135.13 0.00 0.00 0.00 0.00 0.00
00:13:27.127 ===================================================================================================================
00:13:27.127 Total : 34592.75 135.13 0.00 0.00 0.00 0.00 0.00
00:13:27.127
00:13:28.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:28.065 Nvme0n1 : 9.00 34596.11 135.14 0.00 0.00 0.00 0.00 0.00
00:13:28.065 ===================================================================================================================
00:13:28.065 Total : 34596.11 135.14 0.00 0.00 0.00 0.00 0.00
00:13:28.065
00:13:29.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:29.003 Nvme0n1 : 10.00 34614.80 135.21 0.00 0.00 0.00 0.00 0.00
00:13:29.003 ===================================================================================================================
00:13:29.003 Total : 34614.80 135.21 0.00 0.00 0.00 0.00 0.00
00:13:29.003
00:13:29.003
00:13:29.003 Latency(us)
00:13:29.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:29.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:29.003 Nvme0n1 : 10.00 34615.55 135.22 0.00 0.00 3694.74 2179.78 8320.22
00:13:29.003 ===================================================================================================================
00:13:29.003 Total : 34615.55 135.22 0.00 0.00 3694.74 2179.78 8320.22
00:13:29.003 0
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3579247
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3579247 ']'
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3579247
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3579247
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3579247'
killing process with pid 3579247
00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3579247
00:13:29.003 Received shutdown signal, test time was about 10.000000 seconds
00:13:29.003
00:13:29.003 Latency(us)
00:13:29.003 Device Information :
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.003 =================================================================================================================== 00:13:29.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.003 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3579247 00:13:29.262 12:57:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:29.521 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:29.521 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:29.521 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3576597 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3576597 00:13:29.782 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3576597 Killed "${NVMF_APP[@]}" "$@" 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3581394 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3581394 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3581394 ']' 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
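What makes this the dirty variant is the restart step traced just above and below: the first nvmf target (pid 3576597) is killed with SIGKILL so the lvstore never sees a clean shutdown, a fresh target is started, and re-creating the aio bdev forces the blobstore to recover on load. A minimal sketch of that sequence, using the pids and flags from this run (paths shortened to the repo root):

  kill -9 3576597                                  # deliberately skip the clean lvstore shutdown
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # start a fresh target
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # expected on load: 'Performing recovery on blobstore' plus 'Recover: blob 0x0' / 'Recover: blob 0x1'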
00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:29.782 12:57:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 [2024-05-15 12:57:07.640439] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:13:29.782 [2024-05-15 12:57:07.640499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.041 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.041 [2024-05-15 12:57:07.713127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.041 [2024-05-15 12:57:07.802929] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.041 [2024-05-15 12:57:07.802969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.041 [2024-05-15 12:57:07.802978] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.041 [2024-05-15 12:57:07.803003] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.041 [2024-05-15 12:57:07.803010] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.041 [2024-05-15 12:57:07.803031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.608 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:30.867 [2024-05-15 12:57:08.647105] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:30.867 [2024-05-15 12:57:08.647208] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:30.867 [2024-05-15 12:57:08.647236] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:30.867 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:31.126 12:57:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a23e87b7-acb0-46e5-b71a-40df739b63a4 -t 2000 00:13:31.386 [ 00:13:31.386 { 00:13:31.386 "name": "a23e87b7-acb0-46e5-b71a-40df739b63a4", 00:13:31.386 "aliases": [ 00:13:31.386 "lvs/lvol" 00:13:31.386 ], 00:13:31.386 "product_name": "Logical Volume", 00:13:31.386 "block_size": 4096, 00:13:31.386 "num_blocks": 38912, 00:13:31.386 "uuid": "a23e87b7-acb0-46e5-b71a-40df739b63a4", 00:13:31.386 "assigned_rate_limits": { 00:13:31.386 "rw_ios_per_sec": 0, 00:13:31.386 "rw_mbytes_per_sec": 0, 00:13:31.386 "r_mbytes_per_sec": 0, 00:13:31.386 "w_mbytes_per_sec": 0 00:13:31.386 }, 00:13:31.386 "claimed": false, 00:13:31.386 "zoned": false, 00:13:31.386 "supported_io_types": { 00:13:31.386 "read": true, 00:13:31.386 "write": true, 00:13:31.386 "unmap": true, 00:13:31.386 "write_zeroes": true, 00:13:31.386 "flush": false, 00:13:31.386 "reset": true, 00:13:31.386 "compare": false, 00:13:31.386 "compare_and_write": false, 00:13:31.386 "abort": false, 00:13:31.386 "nvme_admin": false, 00:13:31.386 "nvme_io": false 00:13:31.386 }, 00:13:31.386 "driver_specific": { 00:13:31.386 "lvol": { 00:13:31.386 "lvol_store_uuid": "3e0ff240-2812-40e4-8797-4c21a188adec", 00:13:31.386 "base_bdev": "aio_bdev", 00:13:31.386 "thin_provision": false, 00:13:31.386 "num_allocated_clusters": 38, 00:13:31.386 "snapshot": false, 00:13:31.386 "clone": false, 00:13:31.386 "esnap_clone": false 00:13:31.386 } 00:13:31.386 } 00:13:31.386 } 00:13:31.386 ] 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:31.386 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:31.643 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:31.643 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:31.902 [2024-05-15 12:57:09.559521] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 
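The hotremove notice above means deleting aio_bdev tears down the lvstore with it, so the next bdev_lvol_get_lvstores call is expected to fail; the harness wraps it in its NOT helper, which passes only on a non-zero exit. Roughly what that wrapper asserts (UUID from this run, a sketch rather than the harness code):

  scripts/rpc.py bdev_aio_delete aio_bdev          # lvstore 'lvs' goes away with its base bdev
  if scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec; then
      echo 'unexpected success'; exit 1
  else
      echo 'failed with -19 (No such device), as expected'
  fi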
00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:31.902 request: 00:13:31.902 { 00:13:31.902 "uuid": "3e0ff240-2812-40e4-8797-4c21a188adec", 00:13:31.902 "method": "bdev_lvol_get_lvstores", 00:13:31.902 "req_id": 1 00:13:31.902 } 00:13:31.902 Got JSON-RPC error response 00:13:31.902 response: 00:13:31.902 { 00:13:31.902 "code": -19, 00:13:31.902 "message": "No such device" 00:13:31.902 } 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:31.902 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:32.161 aio_bdev 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:32.161 12:57:09 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:32.161 12:57:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:32.420 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a23e87b7-acb0-46e5-b71a-40df739b63a4 -t 2000 00:13:32.420 [ 00:13:32.420 { 00:13:32.420 "name": "a23e87b7-acb0-46e5-b71a-40df739b63a4", 00:13:32.420 "aliases": [ 00:13:32.420 "lvs/lvol" 00:13:32.420 ], 00:13:32.420 "product_name": "Logical Volume", 00:13:32.420 "block_size": 4096, 00:13:32.420 "num_blocks": 38912, 00:13:32.420 "uuid": "a23e87b7-acb0-46e5-b71a-40df739b63a4", 00:13:32.420 "assigned_rate_limits": { 00:13:32.420 "rw_ios_per_sec": 0, 00:13:32.420 "rw_mbytes_per_sec": 0, 00:13:32.420 "r_mbytes_per_sec": 0, 00:13:32.420 "w_mbytes_per_sec": 0 00:13:32.420 }, 00:13:32.420 "claimed": false, 00:13:32.420 "zoned": false, 00:13:32.420 "supported_io_types": { 00:13:32.420 "read": true, 00:13:32.420 "write": true, 00:13:32.420 "unmap": true, 00:13:32.420 "write_zeroes": true, 00:13:32.420 "flush": false, 00:13:32.420 "reset": true, 00:13:32.420 "compare": false, 00:13:32.420 "compare_and_write": false, 00:13:32.420 "abort": false, 00:13:32.420 "nvme_admin": false, 00:13:32.420 "nvme_io": false 00:13:32.420 }, 00:13:32.420 "driver_specific": { 00:13:32.420 "lvol": { 00:13:32.420 "lvol_store_uuid": "3e0ff240-2812-40e4-8797-4c21a188adec", 00:13:32.420 "base_bdev": "aio_bdev", 00:13:32.420 "thin_provision": false, 00:13:32.420 "num_allocated_clusters": 38, 00:13:32.420 "snapshot": false, 00:13:32.420 "clone": false, 00:13:32.420 "esnap_clone": false 00:13:32.420 } 00:13:32.420 } 00:13:32.420 } 00:13:32.420 ] 00:13:32.420 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:32.680 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:32.680 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:32.680 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:32.680 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:32.680 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:32.938 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:32.938 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a23e87b7-acb0-46e5-b71a-40df739b63a4 00:13:33.195 12:57:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e0ff240-2812-40e4-8797-4c21a188adec 00:13:33.195 12:57:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
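After the cluster counts check out, teardown walks back down the stack in reverse order of creation, leaf first. Condensed from the trace (names and UUIDs from this run, paths shortened to the repo root):

  scripts/rpc.py bdev_lvol_delete a23e87b7-acb0-46e5-b71a-40df739b63a4
  scripts/rpc.py bdev_lvol_delete_lvstore -u 3e0ff240-2812-40e4-8797-4c21a188adec
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f test/nvmf/target/aio_bdev                  # finally drop the backing file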
00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:33.454 00:13:33.454 real 0m17.764s 00:13:33.454 user 0m45.838s 00:13:33.454 sys 0m3.433s 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:33.454 ************************************ 00:13:33.454 END TEST lvs_grow_dirty 00:13:33.454 ************************************ 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:33.454 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:33.454 nvmf_trace.0 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:33.712 rmmod nvme_rdma 00:13:33.712 rmmod nvme_fabrics 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3581394 ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3581394 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3581394 ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3581394 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3581394 00:13:33.712 12:57:11 
nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3581394' 00:13:33.712 killing process with pid 3581394 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3581394 00:13:33.712 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3581394 00:13:33.970 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.970 12:57:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:33.970 00:13:33.970 real 0m41.455s 00:13:33.970 user 1m7.684s 00:13:33.970 sys 0m9.880s 00:13:33.970 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.970 12:57:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:33.970 ************************************ 00:13:33.970 END TEST nvmf_lvs_grow 00:13:33.970 ************************************ 00:13:33.970 12:57:11 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:33.970 12:57:11 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:33.970 12:57:11 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.970 12:57:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:33.970 ************************************ 00:13:33.970 START TEST nvmf_bdev_io_wait 00:13:33.970 ************************************ 00:13:33.970 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:33.970 * Looking for test storage... 
00:13:33.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:33.970 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.970 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.228 12:57:11 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.228 12:57:11 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.795 
12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:40.795 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:40.795 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:40.795 Found net devices under 0000:18:00.0: mlx_0_0 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:40.795 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:40.796 Found net devices under 0000:18:00.1: mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:40.796 12:57:17 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:40.796 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.796 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:40.796 altname enp24s0f0np0 00:13:40.796 altname ens785f0np0 00:13:40.796 inet 192.168.100.8/24 scope global mlx_0_0 00:13:40.796 valid_lft forever preferred_lft forever 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:40.796 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:40.796 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:40.796 altname enp24s0f1np1 00:13:40.796 altname ens785f1np1 00:13:40.796 inet 192.168.100.9/24 scope global mlx_0_1 00:13:40.796 valid_lft forever preferred_lft forever 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:40.796 12:57:17 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:40.796 192.168.100.9' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:40.796 192.168.100.9' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:40.796 192.168.100.9' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3584728 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3584728 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3584728 ']' 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:40.796 12:57:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 [2024-05-15 12:57:17.656401] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:13:40.796 [2024-05-15 12:57:17.656459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.796 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.796 [2024-05-15 12:57:17.727770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.796 [2024-05-15 12:57:17.816244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.796 [2024-05-15 12:57:17.816285] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:40.796 [2024-05-15 12:57:17.816295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.796 [2024-05-15 12:57:17.816304] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.796 [2024-05-15 12:57:17.816311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.796 [2024-05-15 12:57:17.816361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.796 [2024-05-15 12:57:17.816449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.796 [2024-05-15 12:57:17.816525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.796 [2024-05-15 12:57:17.816527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.796 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 [2024-05-15 12:57:18.616548] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1162120/0x1166610) succeed. 00:13:40.796 [2024-05-15 12:57:18.626637] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1163760/0x11a7ca0) succeed. 
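For orientation: the rpc_cmd calls traced around here are thin wrappers over SPDK's scripts/rpc.py client (default socket /var/tmp/spdk.sock). Issued by hand, the same target bring-up, including the provisioning calls traced just below, reduces roughly to this sketch:

# Sketch: manual equivalent of the rpc_cmd sequence in this test, using the
# rpc.py client shipped with SPDK (socket defaults to /var/tmp/spdk.sock).
./scripts/rpc.py bdev_set_options -p 5 -c 1    # deliberately tiny bdev_io pool; exercising IO-wait on pool exhaustion is the point of this test
./scripts/rpc.py framework_start_init          # leave the --wait-for-rpc holding state
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Provisioning steps traced just below:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420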
00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.055 Malloc0 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.055 [2024-05-15 12:57:18.823529] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:41.055 [2024-05-15 12:57:18.823930] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3584927 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3584929 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:41.055 { 00:13:41.055 "params": { 00:13:41.055 "name": "Nvme$subsystem", 00:13:41.055 "trtype": "$TEST_TRANSPORT", 00:13:41.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.055 "adrfam": "ipv4", 00:13:41.055 "trsvcid": "$NVMF_PORT", 00:13:41.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.055 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:13:41.055 "hdgst": ${hdgst:-false}, 00:13:41.055 "ddgst": ${ddgst:-false} 00:13:41.055 }, 00:13:41.055 "method": "bdev_nvme_attach_controller" 00:13:41.055 } 00:13:41.055 EOF 00:13:41.055 )") 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3584931 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:41.055 { 00:13:41.055 "params": { 00:13:41.055 "name": "Nvme$subsystem", 00:13:41.055 "trtype": "$TEST_TRANSPORT", 00:13:41.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.055 "adrfam": "ipv4", 00:13:41.055 "trsvcid": "$NVMF_PORT", 00:13:41.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.055 "hdgst": ${hdgst:-false}, 00:13:41.055 "ddgst": ${ddgst:-false} 00:13:41.055 }, 00:13:41.055 "method": "bdev_nvme_attach_controller" 00:13:41.055 } 00:13:41.055 EOF 00:13:41.055 )") 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3584934 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:41.055 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:41.055 { 00:13:41.055 "params": { 00:13:41.055 "name": "Nvme$subsystem", 00:13:41.055 "trtype": "$TEST_TRANSPORT", 00:13:41.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.055 "adrfam": "ipv4", 00:13:41.055 "trsvcid": "$NVMF_PORT", 00:13:41.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.055 "hdgst": ${hdgst:-false}, 00:13:41.055 "ddgst": ${ddgst:-false} 00:13:41.055 }, 00:13:41.055 "method": "bdev_nvme_attach_controller" 00:13:41.055 } 00:13:41.055 EOF 00:13:41.055 )") 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# cat 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:41.056 { 00:13:41.056 "params": { 00:13:41.056 "name": "Nvme$subsystem", 00:13:41.056 "trtype": "$TEST_TRANSPORT", 00:13:41.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.056 "adrfam": "ipv4", 00:13:41.056 "trsvcid": "$NVMF_PORT", 00:13:41.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.056 "hdgst": ${hdgst:-false}, 00:13:41.056 "ddgst": ${ddgst:-false} 00:13:41.056 }, 00:13:41.056 "method": "bdev_nvme_attach_controller" 00:13:41.056 } 00:13:41.056 EOF 00:13:41.056 )") 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3584927 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:41.056 "params": { 00:13:41.056 "name": "Nvme1", 00:13:41.056 "trtype": "rdma", 00:13:41.056 "traddr": "192.168.100.8", 00:13:41.056 "adrfam": "ipv4", 00:13:41.056 "trsvcid": "4420", 00:13:41.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.056 "hdgst": false, 00:13:41.056 "ddgst": false 00:13:41.056 }, 00:13:41.056 "method": "bdev_nvme_attach_controller" 00:13:41.056 }' 00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
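Each bdevperf instance receives its configuration as JSON on /dev/fd/63. The printf fragments assembled above and below are the per-instance config entries; gen_nvmf_target_json wraps one in a bdev-subsystem document that looks roughly like the following (a sketch of the assembled output, modulo any extra entries the harness appends; not verbatim log output):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}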
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:41.056 "params": {
00:13:41.056 "name": "Nvme1",
00:13:41.056 "trtype": "rdma",
00:13:41.056 "traddr": "192.168.100.8",
00:13:41.056 "adrfam": "ipv4",
00:13:41.056 "trsvcid": "4420",
00:13:41.056 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:41.056 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:41.056 "hdgst": false,
00:13:41.056 "ddgst": false
00:13:41.056 },
00:13:41.056 "method": "bdev_nvme_attach_controller"
00:13:41.056 }'
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:41.056 "params": {
00:13:41.056 "name": "Nvme1",
00:13:41.056 "trtype": "rdma",
00:13:41.056 "traddr": "192.168.100.8",
00:13:41.056 "adrfam": "ipv4",
00:13:41.056 "trsvcid": "4420",
00:13:41.056 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:41.056 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:41.056 "hdgst": false,
00:13:41.056 "ddgst": false
00:13:41.056 },
00:13:41.056 "method": "bdev_nvme_attach_controller"
00:13:41.056 }'
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:13:41.056 12:57:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:41.056 "params": {
00:13:41.056 "name": "Nvme1",
00:13:41.056 "trtype": "rdma",
00:13:41.056 "traddr": "192.168.100.8",
00:13:41.056 "adrfam": "ipv4",
00:13:41.056 "trsvcid": "4420",
00:13:41.056 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:41.056 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:41.056 "hdgst": false,
00:13:41.056 "ddgst": false
00:13:41.056 },
00:13:41.056 "method": "bdev_nvme_attach_controller"
00:13:41.056 }'
00:13:41.056 [2024-05-15 12:57:18.877556] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:13:41.056 [2024-05-15 12:57:18.877560] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:13:41.056 [2024-05-15 12:57:18.877559] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:13:41.056 [2024-05-15 12:57:18.877627] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:13:41.056 [2024-05-15 12:57:18.877628] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:13:41.056 [2024-05-15 12:57:18.877629] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
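Three of the four bdevperf instances have registered with DPDK at this point; the fourth follows just below. Condensed, the parallel launch that bdev_io_wait.sh performs is the following sketch (binary path shortened; the --json /dev/fd/63 seen in the trace is the process substitution written out here):

# Sketch: the four concurrent bdevperf jobs from this test, one core and one
# shm id each, all attaching to the same RDMA subsystem for one second.
BDEVPERF=./build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID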
00:13:41.056 [2024-05-15 12:57:18.881617] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
00:13:41.056 [2024-05-15 12:57:18.881670] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:13:41.314 EAL: No free 2048 kB hugepages reported on node 1
00:13:41.314 EAL: No free 2048 kB hugepages reported on node 1
00:13:41.314 [2024-05-15 12:57:19.085842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.314 EAL: No free 2048 kB hugepages reported on node 1
00:13:41.314 [2024-05-15 12:57:19.168297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:13:41.314 [2024-05-15 12:57:19.186864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.572 EAL: No free 2048 kB hugepages reported on node 1
00:13:41.572 [2024-05-15 12:57:19.269083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:13:41.572 [2024-05-15 12:57:19.288585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.572 [2024-05-15 12:57:19.349799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.572 [2024-05-15 12:57:19.376344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:13:41.572 [2024-05-15 12:57:19.431091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:13:41.830 Running I/O for 1 seconds...
00:13:41.830 Running I/O for 1 seconds...
00:13:41.830 Running I/O for 1 seconds...
00:13:41.830 Running I/O for 1 seconds...
00:13:42.768
00:13:42.768 Latency(us)
00:13:42.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:42.768 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:13:42.768 Nvme1n1 : 1.01 17380.35 67.89 0.00 0.00 7341.20 4359.57 12879.25
00:13:42.768 ===================================================================================================================
00:13:42.768 Total : 17380.35 67.89 0.00 0.00 7341.20 4359.57 12879.25
00:13:42.768
00:13:42.768 Latency(us)
00:13:42.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:42.768 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:13:42.768 Nvme1n1 : 1.01 14012.83 54.74 0.00 0.00 9104.10 5727.28 17780.20
00:13:42.768 ===================================================================================================================
00:13:42.768 Total : 14012.83 54.74 0.00 0.00 9104.10 5727.28 17780.20
00:13:42.768
00:13:42.768 Latency(us)
00:13:42.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:42.768 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:13:42.768 Nvme1n1 : 1.00 253444.58 990.02 0.00 0.00 503.45 211.92 1894.85
00:13:42.768 ===================================================================================================================
00:13:42.768 Total : 253444.58 990.02 0.00 0.00 503.45 211.92 1894.85
00:13:42.768
00:13:42.768 Latency(us)
00:13:42.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:42.769 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:13:42.769 Nvme1n1 : 1.00 17955.85 70.14 0.00 0.00 7112.73 3462.01 18578.03
00:13:42.769 ===================================================================================================================
00:13:42.769 Total : 17955.85 70.14 0.00 0.00 7112.73 3462.01 18578.03
00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- #
wait 3584929 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3584931 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3584934 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.028 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:43.287 rmmod nvme_rdma 00:13:43.287 rmmod nvme_fabrics 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3584728 ']' 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3584728 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3584728 ']' 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3584728 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3584728 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3584728' 00:13:43.287 killing process with pid 3584728 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3584728 00:13:43.287 [2024-05-15 12:57:20.989535] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:43.287 12:57:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3584728 00:13:43.287 [2024-05-15 12:57:21.070751] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:43.546 12:57:21 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.546 12:57:21 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:43.546 00:13:43.546 real 0m9.508s 00:13:43.546 user 0m20.982s 00:13:43.546 sys 0m5.909s 00:13:43.546 12:57:21 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:43.546 12:57:21 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:43.546 ************************************ 00:13:43.546 END TEST nvmf_bdev_io_wait 00:13:43.546 ************************************ 00:13:43.546 12:57:21 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:43.547 12:57:21 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:43.547 12:57:21 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:43.547 12:57:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:43.547 ************************************ 00:13:43.547 START TEST nvmf_queue_depth 00:13:43.547 ************************************ 00:13:43.547 12:57:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:43.806 * Looking for test storage... 00:13:43.806 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:43.806 12:57:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.807 12:57:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.409 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:50.410 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:50.410 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:50.410 Found net devices under 0000:18:00.0: mlx_0_0 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:50.410 Found net devices under 0000:18:00.1: mlx_0_1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:50.410 12:57:27 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:50.410 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:50.410 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:50.410 altname enp24s0f0np0 00:13:50.410 altname ens785f0np0 00:13:50.410 inet 192.168.100.8/24 scope global mlx_0_0 00:13:50.410 valid_lft forever preferred_lft forever 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
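The get_rdma_if_list helper traced here filters the detected net devices against the rxe-capable devices reported by rxe_cfg, and uses bash's continue 2 to resume the outer loop as soon as a match has been echoed. A minimal sketch of that double loop, with the arrays hard-coded to this run's values (the harness fills them from sysfs and from rxe_cfg rxe-net):

    net_devs=(mlx_0_0 mlx_0_1)        # normally gathered from /sys/bus/pci/devices/*/net/
    rxe_net_devs=(mlx_0_0 mlx_0_1)    # normally mapfile'd from "rxe_cfg rxe-net" output
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2            # matched: skip straight to the next net_dev
            fi
        done
    done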
00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:50.410 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:50.410 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:50.410 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:50.411 altname enp24s0f1np1 00:13:50.411 altname ens785f1np1 00:13:50.411 inet 192.168.100.9/24 scope global mlx_0_1 00:13:50.411 valid_lft forever preferred_lft forever 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 
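The get_ip_address calls above resolve an interface's IPv4 address with a three-stage pipeline: ip -o -4 addr show prints one line per address, awk picks the ADDR/PREFIX field, and cut drops the prefix length. The same pipeline as a standalone sketch (mlx_0_0 is simply the first interface of this run):

    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is "ADDR/PREFIX", e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this host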
00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:50.411 192.168.100.9' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:50.411 192.168.100.9' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:50.411 192.168.100.9' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3588050 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3588050 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3588050 ']' 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
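Here common.sh reduces the newline-separated RDMA_IP_LIST to the first and second target IPs with head and tail. The same selection in isolation (addresses copied from this run):

    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9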
00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.411 12:57:27 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.411 [2024-05-15 12:57:27.709670] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:13:50.411 [2024-05-15 12:57:27.709733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.411 [2024-05-15 12:57:27.782118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.411 [2024-05-15 12:57:27.873111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.411 [2024-05-15 12:57:27.873146] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.411 [2024-05-15 12:57:27.873155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.411 [2024-05-15 12:57:27.873164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.411 [2024-05-15 12:57:27.873172] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.411 [2024-05-15 12:57:27.873201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.670 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 [2024-05-15 12:57:28.572915] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x67f300/0x6837f0) succeed. 00:13:50.929 [2024-05-15 12:57:28.582051] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x680800/0x6c4e80) succeed. 
00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 Malloc0 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 [2024-05-15 12:57:28.667514] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:50.929 [2024-05-15 12:57:28.667869] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3588248 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3588248 /var/tmp/bdevperf.sock 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3588248 ']' 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
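The rpc_cmd calls traced above provision the target end to end: transport, backing bdev, subsystem, namespace, listener. Issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock, the same sequence looks like this (all values copied from this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420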
00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.929 12:57:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.929 [2024-05-15 12:57:28.717309] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:13:50.929 [2024-05-15 12:57:28.717362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588248 ] 00:13:50.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.929 [2024-05-15 12:57:28.787928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.188 [2024-05-15 12:57:28.869963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.757 12:57:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.757 12:57:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:51.757 12:57:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:51.757 12:57:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.757 12:57:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:51.758 NVMe0n1 00:13:51.758 12:57:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.758 12:57:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:52.017 Running I/O for 10 seconds... 
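The bdevperf side of the test reduces to three steps: start bdevperf idle with -z, attach the remote controller over RDMA, then trigger the run through bdevperf.py. A sketch using the same paths and flags as this run (the sleep is a crude stand-in for the harness's waitforlisten on the RPC socket):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    sleep 2   # let bdevperf open /var/tmp/bdevperf.sock
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests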
00:14:02.005
00:14:02.005 Latency(us)
00:14:02.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:02.005 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:14:02.005 Verification LBA range: start 0x0 length 0x4000
00:14:02.005 NVMe0n1 : 10.03 17583.58 68.69 0.00 0.00 58064.69 5926.73 36928.11
00:14:02.005 ===================================================================================================================
00:14:02.005 Total : 17583.58 68.69 0.00 0.00 58064.69 5926.73 36928.11
00:14:02.005 0
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3588248
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3588248 ']'
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3588248
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3588248
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3588248'
00:14:02.005 killing process with pid 3588248
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3588248
00:14:02.005 Received shutdown signal, test time was about 10.000000 seconds
00:14:02.005
00:14:02.005 Latency(us)
00:14:02.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:02.005 ===================================================================================================================
00:14:02.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:02.005 12:57:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3588248
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:14:02.265 rmmod nvme_rdma
00:14:02.265 rmmod nvme_fabrics
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3588050 ']'
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3588050
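killprocess, traced above first for the bdevperf pid 3588248 and then for the target pid 3588050, follows a fixed pattern: probe the pid with kill -0, read its command name with ps, refuse to touch a sudo wrapper, then signal and reap it. A hedged re-creation assembled from the traced checks (the final wait mirrors the harness's separate wait step):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1          # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1          # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                         # reaps the pid when it is our child
    }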
00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3588050 ']' 00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3588050 00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:02.265 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3588050 00:14:02.524 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:02.524 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:02.524 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3588050' 00:14:02.524 killing process with pid 3588050 00:14:02.524 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3588050 00:14:02.524 [2024-05-15 12:57:40.173462] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:02.524 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3588050 00:14:02.524 [2024-05-15 12:57:40.214518] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:02.784 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.784 12:57:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:02.784 00:14:02.784 real 0m19.082s 00:14:02.784 user 0m26.087s 00:14:02.784 sys 0m5.386s 00:14:02.784 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.784 12:57:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.784 ************************************ 00:14:02.784 END TEST nvmf_queue_depth 00:14:02.784 ************************************ 00:14:02.784 12:57:40 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:02.784 12:57:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.784 12:57:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.784 12:57:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:02.784 ************************************ 00:14:02.784 START TEST nvmf_target_multipath 00:14:02.784 ************************************ 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:02.784 * Looking for test storage... 
00:14:02.784 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.784 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.044 12:57:40 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
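In the common.sh prologue above, the host identity used by later nvme connect calls is generated with nvme-cli. The derivation of NVME_HOSTID below is an assumption; the trace only shows the resulting values, where the hostid equals the uuid suffix of the hostnqn:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: hostid is the trailing uuid field
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")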
00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.045 12:57:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.619 12:57:46 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:09.619 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:09.619 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:09.619 Found net devices under 0000:18:00.0: mlx_0_0 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:09.619 Found net devices under 0000:18:00.1: mlx_0_1 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.619 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:09.620 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:09.620 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:09.620 altname enp24s0f0np0 00:14:09.620 altname ens785f0np0 00:14:09.620 inet 192.168.100.8/24 scope global mlx_0_0 00:14:09.620 valid_lft forever preferred_lft forever 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:09.620 12:57:46 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:09.620 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:09.620 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:09.620 altname enp24s0f1np1 00:14:09.620 altname ens785f1np1 00:14:09.620 inet 192.168.100.9/24 scope global mlx_0_1 00:14:09.620 valid_lft forever preferred_lft forever 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:09.620 192.168.100.9' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:09.620 192.168.100.9' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:09.620 192.168.100.9' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:14:09.620 run this test only with TCP transport for now 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:09.620 rmmod nvme_rdma 00:14:09.620 rmmod nvme_fabrics 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:09.620 
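nvmftestfini's module cleanup, traced here for the second time in this excerpt, disables errexit and retries the unload up to 20 times so a still-busy nvme-rdma module can drain. The break-on-success condition below is an assumption; the trace shows only the loop header and the two modprobe -r calls:

    set +e
    for i in {1..20}; do
        # -v echoes the underlying rmmod calls, matching the "rmmod nvme_rdma" lines above
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e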
12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:09.620 00:14:09.620 real 0m6.043s 00:14:09.620 user 0m1.706s 00:14:09.620 sys 0m4.524s 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.620 12:57:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:09.620 ************************************ 00:14:09.620 END TEST nvmf_target_multipath 00:14:09.620 ************************************ 00:14:09.620 12:57:46 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:09.620 12:57:46 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.620 12:57:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.620 12:57:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:09.620 ************************************ 00:14:09.620 START TEST nvmf_zcopy 00:14:09.620 ************************************ 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:09.620 * Looking for test storage... 
00:14:09.620 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.620 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.621 12:57:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.894 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.894 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.894 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.894 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:14.895 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:14.895 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:14.895 Found net devices under 0000:18:00.0: mlx_0_0 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:14.895 Found net devices under 0000:18:00.1: mlx_0_1 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.895 12:57:52 
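Each 'Found net devices under ...' line above comes from globbing sysfs under the matched PCI function: the kernel exposes a NIC's netdev name as a directory entry under its PCI device node. The lookup, reconstructed from the common.sh@382-401 trace with this rig's two Mellanox ports:

  for pci in 0000:18:00.0 0000:18:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as common.sh@383
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep e.g. mlx_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done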
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:14.895 12:57:52 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:14.895 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:15.154 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.154 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:15.154 altname enp24s0f0np0 00:14:15.154 altname ens785f0np0 00:14:15.154 inet 192.168.100.8/24 scope global mlx_0_0 00:14:15.154 valid_lft forever preferred_lft forever 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:15.154 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.154 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:15.154 altname enp24s0f1np1 00:14:15.154 altname ens785f1np1 00:14:15.154 inet 192.168.100.9/24 scope global mlx_0_1 00:14:15.154 valid_lft forever preferred_lft forever 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.154 12:57:52 
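get_ip_address, traced at common.sh@112-113 above, is a three-stage pipeline over `ip -o -4 addr show`: the one-line-per-address output puts the CIDR in field 4, awk selects it, and cut drops the prefix length. Reconstructed from the trace:

  get_ip_address() {
      local interface=$1
      # ip -o -4 prints e.g. "32: mlx_0_0    inet 192.168.100.8/24 ... scope global mlx_0_0"
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9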
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:15.154 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:15.155 192.168.100.9' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:15.155 192.168.100.9' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:15.155 192.168.100.9' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3595360 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3595360 00:14:15.155 12:57:52 
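The two addresses are then peeled off RDMA_IP_LIST with the head/tail calls traced at common.sh@456-459; a stand-alone reconstruction:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                           # as gathered above
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
  [ -z "$NVMF_FIRST_TARGET_IP" ] && echo 'no RDMA-capable interfaces found' >&2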
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3595360 ']' 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:15.155 12:57:52 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:15.155 [2024-05-15 12:57:52.953034] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:14:15.155 [2024-05-15 12:57:52.953104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.155 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.155 [2024-05-15 12:57:53.024638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.414 [2024-05-15 12:57:53.104823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.414 [2024-05-15 12:57:53.104866] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.414 [2024-05-15 12:57:53.104875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.414 [2024-05-15 12:57:53.104899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.414 [2024-05-15 12:57:53.104907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
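nvmfappstart amounts to launching nvmf_tgt in the background and blocking until its RPC socket answers; the 'Waiting for process...' message above is printed while that poll spins. A hedged sketch (the real waitforlisten in autotest_common.sh also enforces a retry budget, and the rpc_get_methods probe here is one plausible liveness check, not necessarily the exact one used):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
      sleep 0.5
  done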
00:14:15.414 [2024-05-15 12:57:53.104941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:14:15.983 Unsupported transport: rdma 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@804 -- # type=--id 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # id=0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:15.983 nvmf_trace.0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # return 0 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.983 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:16.242 rmmod nvme_rdma 00:14:16.242 rmmod nvme_fabrics 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3595360 ']' 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3595360 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3595360 ']' 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3595360 00:14:16.242 12:57:53 
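zcopy.sh supports only TCP, so on this RDMA pass it prints 'Unsupported transport: rdma' and exits 0 as a clean skip; because common.sh@484 installed an EXIT trap beforehand, the shm trace file still gets tarred into the output directory and nvmftestfini still tears the target down, exactly as traced above. The control flow, roughly (the transport variable name is assumed):

  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
  if [ "$TEST_TRANSPORT" != tcp ]; then
      echo "Unsupported transport: $TEST_TRANSPORT"
      exit 0    # the EXIT trap above still runs the archive-and-teardown steps
  fi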
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3595360 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3595360' 00:14:16.242 killing process with pid 3595360 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3595360 00:14:16.242 12:57:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3595360 00:14:16.501 12:57:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.501 12:57:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:16.501 00:14:16.501 real 0m7.481s 00:14:16.501 user 0m3.121s 00:14:16.501 sys 0m5.090s 00:14:16.501 12:57:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:16.501 12:57:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:16.501 ************************************ 00:14:16.501 END TEST nvmf_zcopy 00:14:16.501 ************************************ 00:14:16.501 12:57:54 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:16.502 12:57:54 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:16.502 12:57:54 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.502 12:57:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:16.502 ************************************ 00:14:16.502 START TEST nvmf_nmic 00:14:16.502 ************************************ 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:16.502 * Looking for test storage... 
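The killprocess step inside nvmftestfini, traced across the zcopy teardown above (autotest_common.sh@946-970), confirms the pid is alive, checks the process name so it never signals a bare sudo, then kills and reaps the target. Reconstructed, slightly simplified:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                      # fails if already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid") # reactor_1 in this run
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                 # reap it; ignore exit status
  }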
00:14:16.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.502 
12:57:54 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.502 12:57:54 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:23.072 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.072 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.073 12:58:00 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:23.073 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:23.073 Found net devices under 0000:18:00.0: mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:23.073 Found net devices under 0000:18:00.1: mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:23.073 12:58:00 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:23.073 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:14:23.073 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:23.073 altname enp24s0f0np0 00:14:23.073 altname ens785f0np0 00:14:23.073 inet 192.168.100.8/24 scope global mlx_0_0 00:14:23.073 valid_lft forever preferred_lft forever 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:23.073 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.073 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:23.073 altname enp24s0f1np1 00:14:23.073 altname ens785f1np1 00:14:23.073 inet 192.168.100.9/24 scope global mlx_0_1 00:14:23.073 valid_lft forever preferred_lft forever 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:23.073 192.168.100.9' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:23.073 192.168.100.9' 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:23.073 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:23.073 192.168.100.9' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3598402 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3598402 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3598402 ']' 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.074 12:58:00 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.074 [2024-05-15 12:58:00.452102] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:14:23.074 [2024-05-15 12:58:00.452159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.074 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.074 [2024-05-15 12:58:00.529082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.074 [2024-05-15 12:58:00.625389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.074 [2024-05-15 12:58:00.625430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.074 [2024-05-15 12:58:00.625440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.074 [2024-05-15 12:58:00.625449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.074 [2024-05-15 12:58:00.625456] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.074 [2024-05-15 12:58:00.625523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.074 [2024-05-15 12:58:00.625610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.074 [2024-05-15 12:58:00.625686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.074 [2024-05-15 12:58:00.625688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 [2024-05-15 12:58:01.352026] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19a30f0/0x19a75e0) succeed. 00:14:23.642 [2024-05-15 12:58:01.362618] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19a4730/0x19e8c70) succeed. 
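rpc_cmd forwards its arguments to the target's JSON-RPC socket (effectively scripts/rpc.py against /var/tmp/spdk.sock), so the transport creation above is equivalent to the direct call below; the two create_ib_device notices are the target binding mlx5_0 and mlx5_1 in response:

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192    # -u: I/O unit size in bytes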
00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.642 Malloc0 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.642 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.901 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.901 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 [2024-05-15 12:58:01.540279] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.902 [2024-05-15 12:58:01.540689] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:23.902 test case1: single bdev can't be used in multiple subsystems 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 [2024-05-15 12:58:01.564401] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:23.902 [2024-05-15 12:58:01.564420] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:23.902 [2024-05-15 12:58:01.564430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:23.902 request: 00:14:23.902 { 00:14:23.902 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:23.902 "namespace": { 00:14:23.902 "bdev_name": "Malloc0", 00:14:23.902 "no_auto_visible": false 00:14:23.902 }, 00:14:23.902 "method": "nvmf_subsystem_add_ns", 00:14:23.902 "req_id": 1 00:14:23.902 } 00:14:23.902 Got JSON-RPC error response 00:14:23.902 response: 00:14:23.902 { 00:14:23.902 "code": -32602, 00:14:23.902 "message": "Invalid parameters" 00:14:23.902 } 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:23.902 Adding namespace failed - expected result. 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:23.902 test case2: host connect to nvmf target in multiple paths 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 [2024-05-15 12:58:01.580471] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 12:58:01 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:24.837 12:58:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:14:25.788 12:58:03 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.788 12:58:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:14:25.788 12:58:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.788 12:58:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:25.788 12:58:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
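The two nvme connect invocations above attach nqn.2016-06.io.spdk:cnode1 over both listeners (ports 4420 and 4421), and waitforserial then polls the block layer until a namespace carrying the subsystem serial appears. A rough standalone equivalent of that loop, assuming the retry bound (15) and 2-second sleep read off the trace are the only tunables:

    # Sketch: poll until a block device reports the subsystem serial.
    serial=SPDKISFASTANDAWESOME
    for i in $(seq 1 15); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices >= 1 )) && break    # device enumerated; done
        sleep 2
    done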
00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:14:27.693 12:58:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:27.951 [global] 00:14:27.951 thread=1 00:14:27.951 invalidate=1 00:14:27.951 rw=write 00:14:27.951 time_based=1 00:14:27.951 runtime=1 00:14:27.951 ioengine=libaio 00:14:27.951 direct=1 00:14:27.951 bs=4096 00:14:27.951 iodepth=1 00:14:27.951 norandommap=0 00:14:27.951 numjobs=1 00:14:27.951 00:14:27.951 verify_dump=1 00:14:27.951 verify_backlog=512 00:14:27.951 verify_state_save=0 00:14:27.951 do_verify=1 00:14:27.951 verify=crc32c-intel 00:14:27.951 [job0] 00:14:27.951 filename=/dev/nvme0n1 00:14:27.951 Could not set queue depth (nvme0n1) 00:14:28.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:28.209 fio-3.35 00:14:28.209 Starting 1 thread 00:14:29.147 00:14:29.147 job0: (groupid=0, jobs=1): err= 0: pid=3599259: Wed May 15 12:58:06 2024 00:14:29.147 read: IOPS=6573, BW=25.7MiB/s (26.9MB/s)(25.7MiB/1001msec) 00:14:29.147 slat (nsec): min=8268, max=27085, avg=9017.81, stdev=897.36 00:14:29.147 clat (usec): min=50, max=239, avg=64.36, stdev= 5.29 00:14:29.147 lat (usec): min=59, max=248, avg=73.38, stdev= 5.38 00:14:29.147 clat percentiles (usec): 00:14:29.147 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 61], 00:14:29.147 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:14:29.147 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 73], 00:14:29.147 | 99.00th=[ 77], 99.50th=[ 79], 99.90th=[ 84], 99.95th=[ 89], 00:14:29.147 | 99.99th=[ 239] 00:14:29.147 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:14:29.147 slat (nsec): min=10335, max=42034, avg=11106.21, stdev=1154.70 00:14:29.147 clat (usec): min=45, max=271, avg=62.62, stdev= 5.56 00:14:29.147 lat (usec): min=60, max=282, avg=73.73, stdev= 5.65 00:14:29.147 clat percentiles (usec): 00:14:29.147 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:14:29.147 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:14:29.147 | 70.00th=[ 66], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:14:29.147 | 99.00th=[ 75], 99.50th=[ 78], 99.90th=[ 84], 99.95th=[ 88], 00:14:29.147 | 99.99th=[ 273] 00:14:29.147 bw ( KiB/s): min=28344, max=28344, per=100.00%, avg=28344.00, stdev= 0.00, samples=1 00:14:29.148 iops : min= 7086, max= 7086, avg=7086.00, stdev= 0.00, samples=1 00:14:29.148 lat (usec) : 50=0.04%, 100=99.93%, 250=0.02%, 500=0.01% 00:14:29.148 cpu : usr=6.20%, sys=15.30%, ctx=13236, majf=0, minf=1 00:14:29.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.148 issued rwts: total=6580,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.148 00:14:29.148 Run status group 0 (all jobs): 00:14:29.148 READ: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=25.7MiB (27.0MB), run=1001-1001msec 00:14:29.148 WRITE: bw=26.0MiB/s (27.2MB/s), 
26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:14:29.148 00:14:29.148 Disk stats (read/write): 00:14:29.148 nvme0n1: ios=5816/6144, merge=0/0, ticks=328/336, in_queue=664, util=90.88% 00:14:29.148 12:58:06 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:31.130 rmmod nvme_rdma 00:14:31.130 rmmod nvme_fabrics 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3598402 ']' 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3598402 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3598402 ']' 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3598402 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.130 12:58:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3598402 00:14:31.389 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:31.389 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:31.389 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3598402' 00:14:31.389 killing process with pid 3598402 00:14:31.389 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3598402 00:14:31.389 [2024-05-15 12:58:09.029778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:14:31.389 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3598402 00:14:31.389 [2024-05-15 12:58:09.121291] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:31.648 12:58:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.648 12:58:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:31.648 00:14:31.648 real 0m15.144s 00:14:31.648 user 0m38.893s 00:14:31.648 sys 0m5.629s 00:14:31.648 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.648 12:58:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 ************************************ 00:14:31.648 END TEST nvmf_nmic 00:14:31.648 ************************************ 00:14:31.648 12:58:09 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:31.648 12:58:09 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:31.648 12:58:09 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.648 12:58:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 ************************************ 00:14:31.648 START TEST nvmf_fio_target 00:14:31.648 ************************************ 00:14:31.648 12:58:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:31.907 * Looking for test storage... 00:14:31.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.907 12:58:09 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.907 12:58:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- 
# e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:38.484 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:38.484 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect 
-i 15' 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:38.484 Found net devices under 0000:18:00.0: mlx_0_0 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:38.484 Found net devices under 0000:18:00.1: mlx_0_1 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:38.484 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # 
get_rdma_if_list 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:38.485 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.485 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:38.485 altname enp24s0f0np0 00:14:38.485 altname ens785f0np0 00:14:38.485 inet 192.168.100.8/24 scope global mlx_0_0 00:14:38.485 valid_lft forever preferred_lft forever 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 
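allocate_nic_ips walks get_rdma_if_list and reads each port's IPv4 back with the ip/awk/cut pipeline traced above; on this node that yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. The helper can be read off the trace; the function name matches nvmf/common.sh, and the body is a sketch of the commands at @112-@113:

    # Sketch of get_ip_address as traced at nvmf/common.sh@112-@113.
    get_ip_address() {
        local interface=$1
        # $4 of 'ip -o -4 addr show' is the CIDR address; cut drops the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this node
    get_ip_address mlx_0_1    # 192.168.100.9 on this node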
00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:38.485 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.485 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:38.485 altname enp24s0f1np1 00:14:38.485 altname ens785f1np1 00:14:38.485 inet 192.168.100.9/24 scope global mlx_0_1 00:14:38.485 valid_lft forever preferred_lft forever 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.485 12:58:15 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:38.485 192.168.100.9' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:38.485 192.168.100.9' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:38.485 192.168.100.9' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3602543 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3602543 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3602543 ']' 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
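nvmfappstart has now launched a second nvmf_tgt (pid 3602543) for the fio run, and waitforlisten blocks until the app answers RPC on /var/tmp/spdk.sock, with max_retries=100 as traced. A hedged sketch of that gate; probing with rpc_get_methods is an assumption about how the helper tests the socket, picked because it is a side-effect-free SPDK RPC:

    # Sketch: wait for nvmf_tgt ($nvmfpid) to serve RPC on its UNIX socket.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target exited early'; exit 1; }
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done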
00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.485 12:58:15 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.485 [2024-05-15 12:58:15.959738] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:14:38.485 [2024-05-15 12:58:15.959800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.485 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.485 [2024-05-15 12:58:16.030765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.485 [2024-05-15 12:58:16.111730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.485 [2024-05-15 12:58:16.111776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.485 [2024-05-15 12:58:16.111785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.485 [2024-05-15 12:58:16.111793] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.485 [2024-05-15 12:58:16.111804] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.486 [2024-05-15 12:58:16.111897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.486 [2024-05-15 12:58:16.111998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.486 [2024-05-15 12:58:16.112080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.486 [2024-05-15 12:58:16.112082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.053 12:58:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:39.313 [2024-05-15 12:58:17.012310] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed60f0/0x1eda5e0) succeed. 00:14:39.313 [2024-05-15 12:58:17.024098] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed7730/0x1f1bc70) succeed. 
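With the transport up, fio.sh assembles the namespaces that will back /dev/nvme0n1 through nvme0n4: two plain malloc bdevs, a raid0 over two more, and a concat volume over three. The RPCs are scattered through the trace that follows; consolidated here into one sketch (bdev_malloc_create prints the new bdev's name, which the script captures into malloc_bdevs and friends):

    # Sketch: the bdev layout built by the RPCs traced below.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    m0=$("$rpc" bdev_malloc_create 64 512)   # plain namespace -> Malloc0
    m1=$("$rpc" bdev_malloc_create 64 512)   # plain namespace -> Malloc1
    m2=$("$rpc" bdev_malloc_create 64 512)   # raid0 member    -> Malloc2
    m3=$("$rpc" bdev_malloc_create 64 512)   # raid0 member    -> Malloc3
    "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"
    m4=$("$rpc" bdev_malloc_create 64 512)   # concat member   -> Malloc4
    m5=$("$rpc" bdev_malloc_create 64 512)   # concat member   -> Malloc5
    m6=$("$rpc" bdev_malloc_create 64 512)   # concat member   -> Malloc6
    "$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b "$m4 $m5 $m6"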
00:14:39.313 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:39.572 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:39.572 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:39.831 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:39.831 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.090 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:40.090 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.090 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:40.090 12:58:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:40.349 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.608 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:40.608 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.867 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:40.867 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.127 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:41.127 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:41.127 12:58:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:41.385 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:41.385 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.644 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:41.644 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.644 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:41.904 [2024-05-15 12:58:19.678811] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.904 [2024-05-15 12:58:19.679155] rdma.c:3032:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.904 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:42.164 12:58:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:42.423 12:58:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:14:43.360 12:58:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:14:45.263 12:58:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:45.263 [global] 00:14:45.263 thread=1 00:14:45.263 invalidate=1 00:14:45.263 rw=write 00:14:45.263 time_based=1 00:14:45.263 runtime=1 00:14:45.263 ioengine=libaio 00:14:45.263 direct=1 00:14:45.263 bs=4096 00:14:45.263 iodepth=1 00:14:45.263 norandommap=0 00:14:45.263 numjobs=1 00:14:45.263 00:14:45.263 verify_dump=1 00:14:45.263 verify_backlog=512 00:14:45.263 verify_state_save=0 00:14:45.263 do_verify=1 00:14:45.263 verify=crc32c-intel 00:14:45.263 [job0] 00:14:45.263 filename=/dev/nvme0n1 00:14:45.263 [job1] 00:14:45.263 filename=/dev/nvme0n2 00:14:45.263 [job2] 00:14:45.263 filename=/dev/nvme0n3 00:14:45.263 [job3] 00:14:45.263 filename=/dev/nvme0n4 00:14:45.520 Could not set queue depth (nvme0n1) 00:14:45.520 Could not set queue depth (nvme0n2) 00:14:45.520 Could not set queue depth (nvme0n3) 00:14:45.520 Could not set queue depth (nvme0n4) 00:14:45.778 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:45.778 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:45.778 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:45.778 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:45.778 
fio-3.35 00:14:45.778 Starting 4 threads 00:14:47.148 00:14:47.148 job0: (groupid=0, jobs=1): err= 0: pid=3603671: Wed May 15 12:58:24 2024 00:14:47.148 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:47.148 slat (nsec): min=8289, max=21073, avg=9095.64, stdev=934.15 00:14:47.149 clat (usec): min=75, max=396, avg=146.60, stdev=25.90 00:14:47.149 lat (usec): min=84, max=405, avg=155.70, stdev=25.92 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 91], 5.00th=[ 101], 10.00th=[ 110], 20.00th=[ 131], 00:14:47.149 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:14:47.149 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 182], 95.00th=[ 198], 00:14:47.149 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 221], 99.95th=[ 225], 00:14:47.149 | 99.99th=[ 396] 00:14:47.149 write: IOPS=3441, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1001msec); 0 zone resets 00:14:47.149 slat (nsec): min=10330, max=38104, avg=11263.45, stdev=1179.65 00:14:47.149 clat (usec): min=71, max=406, avg=136.34, stdev=26.01 00:14:47.149 lat (usec): min=81, max=417, avg=147.60, stdev=26.04 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 81], 5.00th=[ 91], 10.00th=[ 100], 20.00th=[ 121], 00:14:47.149 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:14:47.149 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 176], 95.00th=[ 186], 00:14:47.149 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 215], 99.95th=[ 221], 00:14:47.149 | 99.99th=[ 408] 00:14:47.149 bw ( KiB/s): min=14552, max=14552, per=21.46%, avg=14552.00, stdev= 0.00, samples=1 00:14:47.149 iops : min= 3638, max= 3638, avg=3638.00, stdev= 0.00, samples=1 00:14:47.149 lat (usec) : 100=7.67%, 250=92.30%, 500=0.03% 00:14:47.149 cpu : usr=2.90%, sys=8.00%, ctx=6518, majf=0, minf=1 00:14:47.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 issued rwts: total=3072,3445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.149 job1: (groupid=0, jobs=1): err= 0: pid=3603685: Wed May 15 12:58:24 2024 00:14:47.149 read: IOPS=5039, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1002msec) 00:14:47.149 slat (nsec): min=8314, max=33521, avg=9044.37, stdev=1000.81 00:14:47.149 clat (usec): min=65, max=177, avg=87.76, stdev= 8.79 00:14:47.149 lat (usec): min=74, max=210, avg=96.80, stdev= 8.90 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:14:47.149 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 88], 00:14:47.149 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 98], 95.00th=[ 103], 00:14:47.149 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 135], 99.95th=[ 139], 00:14:47.149 | 99.99th=[ 178] 00:14:47.149 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:14:47.149 slat (nsec): min=10391, max=38416, avg=11254.14, stdev=1265.19 00:14:47.149 clat (usec): min=66, max=161, avg=84.60, stdev= 9.26 00:14:47.149 lat (usec): min=77, max=172, avg=95.85, stdev= 9.37 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 00:14:47.149 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:14:47.149 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 101], 00:14:47.149 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 135], 99.95th=[ 137], 
00:14:47.149 | 99.99th=[ 161] 00:14:47.149 bw ( KiB/s): min=20480, max=20480, per=30.20%, avg=20480.00, stdev= 0.00, samples=1 00:14:47.149 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:47.149 lat (usec) : 100=93.49%, 250=6.51% 00:14:47.149 cpu : usr=4.50%, sys=12.19%, ctx=10171, majf=0, minf=1 00:14:47.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 issued rwts: total=5050,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.149 job2: (groupid=0, jobs=1): err= 0: pid=3603700: Wed May 15 12:58:24 2024 00:14:47.149 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:14:47.149 slat (nsec): min=8627, max=34587, avg=9838.02, stdev=1078.66 00:14:47.149 clat (usec): min=71, max=183, avg=92.40, stdev= 7.59 00:14:47.149 lat (usec): min=80, max=192, avg=102.23, stdev= 7.63 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:14:47.149 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:14:47.149 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 101], 95.00th=[ 105], 00:14:47.149 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 137], 99.95th=[ 141], 00:14:47.149 | 99.99th=[ 184] 00:14:47.149 write: IOPS=4976, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1001msec); 0 zone resets 00:14:47.149 slat (nsec): min=10755, max=46200, avg=12318.16, stdev=1428.68 00:14:47.149 clat (usec): min=71, max=376, avg=88.87, stdev= 9.08 00:14:47.149 lat (usec): min=83, max=388, avg=101.19, stdev= 9.17 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:14:47.149 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 89], 00:14:47.149 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 98], 95.00th=[ 103], 00:14:47.149 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 133], 99.95th=[ 172], 00:14:47.149 | 99.99th=[ 375] 00:14:47.149 bw ( KiB/s): min=20480, max=20480, per=30.20%, avg=20480.00, stdev= 0.00, samples=1 00:14:47.149 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:47.149 lat (usec) : 100=90.83%, 250=9.16%, 500=0.01% 00:14:47.149 cpu : usr=5.80%, sys=11.20%, ctx=9589, majf=0, minf=1 00:14:47.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 issued rwts: total=4608,4981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.149 job3: (groupid=0, jobs=1): err= 0: pid=3603705: Wed May 15 12:58:24 2024 00:14:47.149 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:47.149 slat (nsec): min=8539, max=29127, avg=9399.74, stdev=965.41 00:14:47.149 clat (usec): min=82, max=395, avg=146.31, stdev=20.11 00:14:47.149 lat (usec): min=90, max=404, avg=155.71, stdev=20.15 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 98], 5.00th=[ 109], 10.00th=[ 124], 20.00th=[ 135], 00:14:47.149 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:14:47.149 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 184], 00:14:47.149 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 210], 99.95th=[ 227], 
00:14:47.149 | 99.99th=[ 396] 00:14:47.149 write: IOPS=3439, BW=13.4MiB/s (14.1MB/s)(13.4MiB/1001msec); 0 zone resets 00:14:47.149 slat (nsec): min=10510, max=44482, avg=11609.07, stdev=1374.60 00:14:47.149 clat (usec): min=80, max=400, avg=136.00, stdev=19.09 00:14:47.149 lat (usec): min=91, max=412, avg=147.61, stdev=19.10 00:14:47.149 clat percentiles (usec): 00:14:47.149 | 1.00th=[ 90], 5.00th=[ 101], 10.00th=[ 116], 20.00th=[ 124], 00:14:47.149 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:14:47.149 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 172], 00:14:47.149 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 196], 99.95th=[ 202], 00:14:47.149 | 99.99th=[ 400] 00:14:47.149 bw ( KiB/s): min=14560, max=14560, per=21.47%, avg=14560.00, stdev= 0.00, samples=1 00:14:47.149 iops : min= 3640, max= 3640, avg=3640.00, stdev= 0.00, samples=1 00:14:47.149 lat (usec) : 100=2.98%, 250=96.99%, 500=0.03% 00:14:47.149 cpu : usr=3.80%, sys=7.40%, ctx=6515, majf=0, minf=1 00:14:47.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.149 issued rwts: total=3072,3443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.149 00:14:47.149 Run status group 0 (all jobs): 00:14:47.149 READ: bw=61.6MiB/s (64.6MB/s), 12.0MiB/s-19.7MiB/s (12.6MB/s-20.6MB/s), io=61.7MiB (64.7MB), run=1001-1002msec 00:14:47.149 WRITE: bw=66.2MiB/s (69.4MB/s), 13.4MiB/s-20.0MiB/s (14.1MB/s-20.9MB/s), io=66.4MiB (69.6MB), run=1001-1002msec 00:14:47.149 00:14:47.149 Disk stats (read/write): 00:14:47.149 nvme0n1: ios=2610/2959, merge=0/0, ticks=377/376, in_queue=753, util=85.17% 00:14:47.149 nvme0n2: ios=4096/4408, merge=0/0, ticks=342/357, in_queue=699, util=86.37% 00:14:47.149 nvme0n3: ios=3955/4096, merge=0/0, ticks=351/334, in_queue=685, util=88.74% 00:14:47.149 nvme0n4: ios=2560/2960, merge=0/0, ticks=369/372, in_queue=741, util=89.59% 00:14:47.149 12:58:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:47.149 [global] 00:14:47.149 thread=1 00:14:47.149 invalidate=1 00:14:47.149 rw=randwrite 00:14:47.149 time_based=1 00:14:47.149 runtime=1 00:14:47.149 ioengine=libaio 00:14:47.149 direct=1 00:14:47.149 bs=4096 00:14:47.149 iodepth=1 00:14:47.149 norandommap=0 00:14:47.149 numjobs=1 00:14:47.149 00:14:47.149 verify_dump=1 00:14:47.149 verify_backlog=512 00:14:47.149 verify_state_save=0 00:14:47.149 do_verify=1 00:14:47.149 verify=crc32c-intel 00:14:47.149 [job0] 00:14:47.149 filename=/dev/nvme0n1 00:14:47.149 [job1] 00:14:47.149 filename=/dev/nvme0n2 00:14:47.149 [job2] 00:14:47.149 filename=/dev/nvme0n3 00:14:47.149 [job3] 00:14:47.149 filename=/dev/nvme0n4 00:14:47.149 Could not set queue depth (nvme0n1) 00:14:47.149 Could not set queue depth (nvme0n2) 00:14:47.149 Could not set queue depth (nvme0n3) 00:14:47.149 Could not set queue depth (nvme0n4) 00:14:47.149 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:47.149 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:47.149 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:47.149 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:47.149 fio-3.35 00:14:47.149 Starting 4 threads 00:14:48.522 00:14:48.522 job0: (groupid=0, jobs=1): err= 0: pid=3604077: Wed May 15 12:58:26 2024 00:14:48.522 read: IOPS=4728, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec) 00:14:48.522 slat (nsec): min=8603, max=32398, avg=9712.77, stdev=1030.09 00:14:48.522 clat (usec): min=69, max=196, avg=89.62, stdev= 6.60 00:14:48.522 lat (usec): min=81, max=205, avg=99.33, stdev= 6.70 00:14:48.522 clat percentiles (usec): 00:14:48.522 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:14:48.522 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:14:48.522 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 98], 95.00th=[ 101], 00:14:48.522 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 127], 99.95th=[ 131], 00:14:48.522 | 99.99th=[ 198] 00:14:48.522 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:48.522 slat (nsec): min=10305, max=68230, avg=12121.64, stdev=1639.00 00:14:48.522 clat (usec): min=67, max=183, avg=85.96, stdev= 9.11 00:14:48.522 lat (usec): min=79, max=194, avg=98.08, stdev= 9.15 00:14:48.522 clat percentiles (usec): 00:14:48.522 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:14:48.522 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 86], 00:14:48.522 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 95], 95.00th=[ 100], 00:14:48.522 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 137], 99.95th=[ 145], 00:14:48.522 | 99.99th=[ 184] 00:14:48.522 bw ( KiB/s): min=20480, max=20480, per=27.53%, avg=20480.00, stdev= 0.00, samples=1 00:14:48.522 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:48.522 lat (usec) : 100=94.35%, 250=5.65% 00:14:48.522 cpu : usr=7.20%, sys=10.40%, ctx=9853, majf=0, minf=1 00:14:48.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.522 issued rwts: total=4733,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.522 job1: (groupid=0, jobs=1): err= 0: pid=3604092: Wed May 15 12:58:26 2024 00:14:48.523 read: IOPS=4561, BW=17.8MiB/s (18.7MB/s)(17.8MiB/1000msec) 00:14:48.523 slat (nsec): min=8300, max=32132, avg=9131.28, stdev=1188.12 00:14:48.523 clat (usec): min=73, max=217, avg=100.01, stdev=16.70 00:14:48.523 lat (usec): min=81, max=227, avg=109.14, stdev=16.95 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88], 00:14:48.523 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:14:48.523 | 70.00th=[ 101], 80.00th=[ 118], 90.00th=[ 129], 95.00th=[ 133], 00:14:48.523 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 174], 99.95th=[ 182], 00:14:48.523 | 99.99th=[ 219] 00:14:48.523 write: IOPS=4608, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1000msec); 0 zone resets 00:14:48.523 slat (nsec): min=8792, max=39687, avg=11084.47, stdev=1354.66 00:14:48.523 clat (usec): min=69, max=176, avg=93.82, stdev=15.14 00:14:48.523 lat (usec): min=80, max=187, avg=104.90, stdev=15.36 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:14:48.523 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:14:48.523 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 120], 
95.00th=[ 126], 00:14:48.523 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 159], 99.95th=[ 163], 00:14:48.523 | 99.99th=[ 178] 00:14:48.523 bw ( KiB/s): min=20480, max=20480, per=27.53%, avg=20480.00, stdev= 0.00, samples=1 00:14:48.523 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:48.523 lat (usec) : 100=72.12%, 250=27.88% 00:14:48.523 cpu : usr=4.40%, sys=10.60%, ctx=9169, majf=0, minf=1 00:14:48.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 issued rwts: total=4561,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.523 job2: (groupid=0, jobs=1): err= 0: pid=3604109: Wed May 15 12:58:26 2024 00:14:48.523 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:14:48.523 slat (nsec): min=8328, max=28077, avg=9099.92, stdev=1024.40 00:14:48.523 clat (usec): min=85, max=360, avg=109.57, stdev=10.01 00:14:48.523 lat (usec): min=94, max=369, avg=118.67, stdev=10.09 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 102], 00:14:48.523 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 112], 00:14:48.523 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 126], 00:14:48.523 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 143], 99.95th=[ 145], 00:14:48.523 | 99.99th=[ 363] 00:14:48.523 write: IOPS=4291, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1002msec); 0 zone resets 00:14:48.523 slat (nsec): min=10504, max=42837, avg=11146.92, stdev=1201.69 00:14:48.523 clat (usec): min=83, max=313, avg=104.30, stdev= 9.82 00:14:48.523 lat (usec): min=93, max=323, avg=115.44, stdev= 9.92 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 97], 00:14:48.523 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 106], 00:14:48.523 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:14:48.523 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 141], 99.95th=[ 143], 00:14:48.523 | 99.99th=[ 314] 00:14:48.523 bw ( KiB/s): min=16384, max=18016, per=23.12%, avg=17200.00, stdev=1154.00, samples=2 00:14:48.523 iops : min= 4096, max= 4504, avg=4300.00, stdev=288.50, samples=2 00:14:48.523 lat (usec) : 100=25.66%, 250=74.32%, 500=0.02% 00:14:48.523 cpu : usr=4.30%, sys=9.69%, ctx=8397, majf=0, minf=1 00:14:48.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 issued rwts: total=4096,4300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.523 job3: (groupid=0, jobs=1): err= 0: pid=3604110: Wed May 15 12:58:26 2024 00:14:48.523 read: IOPS=4439, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1001msec) 00:14:48.523 slat (nsec): min=8420, max=28597, avg=9012.96, stdev=963.14 00:14:48.523 clat (usec): min=76, max=179, avg=99.98, stdev=16.10 00:14:48.523 lat (usec): min=86, max=188, avg=109.00, stdev=16.21 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:14:48.523 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 00:14:48.523 | 70.00th=[ 102], 80.00th=[ 118], 
90.00th=[ 127], 95.00th=[ 133], 00:14:48.523 | 99.00th=[ 143], 99.50th=[ 147], 99.90th=[ 172], 99.95th=[ 178], 00:14:48.523 | 99.99th=[ 180] 00:14:48.523 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:14:48.523 slat (nsec): min=10439, max=39579, avg=11118.39, stdev=1192.95 00:14:48.523 clat (usec): min=72, max=375, avg=96.76, stdev=16.51 00:14:48.523 lat (usec): min=83, max=386, avg=107.88, stdev=16.57 00:14:48.523 clat percentiles (usec): 00:14:48.523 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:14:48.523 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 94], 00:14:48.523 | 70.00th=[ 101], 80.00th=[ 115], 90.00th=[ 123], 95.00th=[ 128], 00:14:48.523 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 161], 99.95th=[ 163], 00:14:48.523 | 99.99th=[ 375] 00:14:48.523 bw ( KiB/s): min=20480, max=20480, per=27.53%, avg=20480.00, stdev= 0.00, samples=1 00:14:48.523 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:48.523 lat (usec) : 100=68.56%, 250=31.43%, 500=0.01% 00:14:48.523 cpu : usr=4.90%, sys=10.00%, ctx=9052, majf=0, minf=1 00:14:48.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.523 issued rwts: total=4444,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.523 00:14:48.523 Run status group 0 (all jobs): 00:14:48.523 READ: bw=69.5MiB/s (72.9MB/s), 16.0MiB/s-18.5MiB/s (16.7MB/s-19.4MB/s), io=69.7MiB (73.0MB), run=1000-1002msec 00:14:48.523 WRITE: bw=72.7MiB/s (76.2MB/s), 16.8MiB/s-20.0MiB/s (17.6MB/s-20.9MB/s), io=72.8MiB (76.3MB), run=1000-1002msec 00:14:48.523 00:14:48.523 Disk stats (read/write): 00:14:48.523 nvme0n1: ios=4146/4247, merge=0/0, ticks=353/334, in_queue=687, util=85.67% 00:14:48.523 nvme0n2: ios=3699/4096, merge=0/0, ticks=357/348, in_queue=705, util=86.27% 00:14:48.523 nvme0n3: ios=3484/3584, merge=0/0, ticks=369/356, in_queue=725, util=88.70% 00:14:48.523 nvme0n4: ios=3584/4087, merge=0/0, ticks=334/355, in_queue=689, util=89.55% 00:14:48.523 12:58:26 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:48.523 [global] 00:14:48.523 thread=1 00:14:48.523 invalidate=1 00:14:48.523 rw=write 00:14:48.523 time_based=1 00:14:48.523 runtime=1 00:14:48.523 ioengine=libaio 00:14:48.523 direct=1 00:14:48.523 bs=4096 00:14:48.523 iodepth=128 00:14:48.523 norandommap=0 00:14:48.523 numjobs=1 00:14:48.523 00:14:48.523 verify_dump=1 00:14:48.523 verify_backlog=512 00:14:48.523 verify_state_save=0 00:14:48.523 do_verify=1 00:14:48.523 verify=crc32c-intel 00:14:48.523 [job0] 00:14:48.523 filename=/dev/nvme0n1 00:14:48.523 [job1] 00:14:48.523 filename=/dev/nvme0n2 00:14:48.523 [job2] 00:14:48.523 filename=/dev/nvme0n3 00:14:48.523 [job3] 00:14:48.523 filename=/dev/nvme0n4 00:14:48.523 Could not set queue depth (nvme0n1) 00:14:48.523 Could not set queue depth (nvme0n2) 00:14:48.523 Could not set queue depth (nvme0n3) 00:14:48.523 Could not set queue depth (nvme0n4) 00:14:48.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:48.781 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:48.781 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:48.781 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:48.781 fio-3.35 00:14:48.781 Starting 4 threads 00:14:50.154 00:14:50.154 job0: (groupid=0, jobs=1): err= 0: pid=3604400: Wed May 15 12:58:27 2024 00:14:50.154 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:14:50.154 slat (usec): min=2, max=6796, avg=80.42, stdev=394.93 00:14:50.154 clat (usec): min=3074, max=20817, avg=10116.39, stdev=3602.40 00:14:50.154 lat (usec): min=3127, max=21145, avg=10196.81, stdev=3623.65 00:14:50.154 clat percentiles (usec): 00:14:50.154 | 1.00th=[ 3916], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6652], 00:14:50.154 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10945], 00:14:50.154 | 70.00th=[12125], 80.00th=[13435], 90.00th=[15270], 95.00th=[16712], 00:14:50.154 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20841], 99.95th=[20841], 00:14:50.154 | 99.99th=[20841] 00:14:50.154 write: IOPS=6522, BW=25.5MiB/s (26.7MB/s)(25.5MiB/1002msec); 0 zone resets 00:14:50.154 slat (usec): min=2, max=6653, avg=73.77, stdev=373.06 00:14:50.154 clat (usec): min=929, max=20985, avg=9914.76, stdev=4014.27 00:14:50.154 lat (usec): min=1380, max=22186, avg=9988.54, stdev=4037.66 00:14:50.154 clat percentiles (usec): 00:14:50.154 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 5473], 20.00th=[ 6325], 00:14:50.154 | 30.00th=[ 7242], 40.00th=[ 8160], 50.00th=[ 9241], 60.00th=[10421], 00:14:50.154 | 70.00th=[11731], 80.00th=[13173], 90.00th=[16581], 95.00th=[17957], 00:14:50.154 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:14:50.154 | 99.99th=[21103] 00:14:50.154 bw ( KiB/s): min=24576, max=24576, per=26.09%, avg=24576.00, stdev= 0.00, samples=1 00:14:50.154 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:50.154 lat (usec) : 1000=0.01% 00:14:50.154 lat (msec) : 2=0.18%, 4=1.81%, 10=52.97%, 20=44.75%, 50=0.28% 00:14:50.154 cpu : usr=3.50%, sys=4.80%, ctx=1269, majf=0, minf=1 00:14:50.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:50.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.154 issued rwts: total=6144,6536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.154 job1: (groupid=0, jobs=1): err= 0: pid=3604401: Wed May 15 12:58:27 2024 00:14:50.154 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:14:50.154 slat (usec): min=2, max=5330, avg=73.92, stdev=370.59 00:14:50.154 clat (usec): min=2836, max=20682, avg=9924.12, stdev=3138.72 00:14:50.154 lat (usec): min=2838, max=20689, avg=9998.04, stdev=3150.57 00:14:50.154 clat percentiles (usec): 00:14:50.154 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5735], 20.00th=[ 7111], 00:14:50.154 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10552], 00:14:50.154 | 70.00th=[11469], 80.00th=[12518], 90.00th=[13960], 95.00th=[15795], 00:14:50.154 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20579], 99.95th=[20579], 00:14:50.154 | 99.99th=[20579] 00:14:50.154 write: IOPS=6725, BW=26.3MiB/s (27.5MB/s)(26.3MiB/1001msec); 0 zone resets 00:14:50.154 slat (usec): min=2, max=5948, avg=71.80, stdev=340.39 00:14:50.154 clat (usec): min=694, max=18802, avg=9023.49, stdev=2710.17 00:14:50.154 lat (usec): min=1830, max=18805, 
avg=9095.28, stdev=2724.71 00:14:50.154 clat percentiles (usec): 00:14:50.154 | 1.00th=[ 3687], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 6718], 00:14:50.154 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9372], 00:14:50.154 | 70.00th=[10552], 80.00th=[11600], 90.00th=[12649], 95.00th=[13829], 00:14:50.154 | 99.00th=[15664], 99.50th=[16909], 99.90th=[18744], 99.95th=[18744], 00:14:50.154 | 99.99th=[18744] 00:14:50.154 bw ( KiB/s): min=24576, max=24576, per=26.09%, avg=24576.00, stdev= 0.00, samples=1 00:14:50.154 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:50.154 lat (usec) : 750=0.01% 00:14:50.154 lat (msec) : 2=0.06%, 4=1.56%, 10=57.31%, 20=40.95%, 50=0.11% 00:14:50.154 cpu : usr=2.30%, sys=6.20%, ctx=1266, majf=0, minf=1 00:14:50.154 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:50.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.154 issued rwts: total=6656,6732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.154 job2: (groupid=0, jobs=1): err= 0: pid=3604402: Wed May 15 12:58:27 2024 00:14:50.155 read: IOPS=5493, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1001msec) 00:14:50.155 slat (usec): min=2, max=5360, avg=83.86, stdev=402.53 00:14:50.155 clat (usec): min=473, max=23595, avg=10769.15, stdev=3438.73 00:14:50.155 lat (usec): min=1667, max=23597, avg=10853.01, stdev=3454.02 00:14:50.155 clat percentiles (usec): 00:14:50.155 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 7701], 00:14:50.155 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11338], 00:14:50.155 | 70.00th=[12256], 80.00th=[13698], 90.00th=[15008], 95.00th=[17433], 00:14:50.155 | 99.00th=[19268], 99.50th=[19530], 99.90th=[23462], 99.95th=[23462], 00:14:50.155 | 99.99th=[23725] 00:14:50.155 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:14:50.155 slat (usec): min=2, max=6324, avg=91.80, stdev=443.49 00:14:50.155 clat (usec): min=3539, max=22796, avg=11916.30, stdev=3993.84 00:14:50.155 lat (usec): min=3542, max=22810, avg=12008.10, stdev=4007.40 00:14:50.155 clat percentiles (usec): 00:14:50.155 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6980], 20.00th=[ 8291], 00:14:50.155 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11600], 60.00th=[12649], 00:14:50.155 | 70.00th=[13698], 80.00th=[15008], 90.00th=[17171], 95.00th=[19530], 00:14:50.155 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22414], 99.95th=[22414], 00:14:50.155 | 99.99th=[22676] 00:14:50.155 bw ( KiB/s): min=24576, max=24576, per=26.09%, avg=24576.00, stdev= 0.00, samples=1 00:14:50.155 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:50.155 lat (usec) : 500=0.01% 00:14:50.155 lat (msec) : 2=0.09%, 4=0.37%, 10=38.11%, 20=59.00%, 50=2.43% 00:14:50.155 cpu : usr=2.30%, sys=5.10%, ctx=1132, majf=0, minf=1 00:14:50.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:50.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.155 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.155 job3: (groupid=0, jobs=1): err= 0: pid=3604403: Wed May 15 12:58:27 2024 00:14:50.155 read: IOPS=4598, BW=18.0MiB/s 
(18.8MB/s)(18.0MiB/1002msec) 00:14:50.155 slat (usec): min=2, max=6256, avg=103.49, stdev=468.55 00:14:50.155 clat (usec): min=4634, max=24664, avg=13246.61, stdev=3627.13 00:14:50.155 lat (usec): min=4642, max=24999, avg=13350.10, stdev=3642.60 00:14:50.155 clat percentiles (usec): 00:14:50.155 | 1.00th=[ 6325], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[10028], 00:14:50.155 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13042], 60.00th=[13698], 00:14:50.155 | 70.00th=[15008], 80.00th=[16581], 90.00th=[17957], 95.00th=[19268], 00:14:50.155 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23462], 99.95th=[24773], 00:14:50.155 | 99.99th=[24773] 00:14:50.155 write: IOPS=4685, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1002msec); 0 zone resets 00:14:50.155 slat (usec): min=2, max=5784, avg=107.33, stdev=478.91 00:14:50.155 clat (usec): min=987, max=25992, avg=13972.15, stdev=4251.27 00:14:50.155 lat (usec): min=1795, max=26001, avg=14079.48, stdev=4266.84 00:14:50.155 clat percentiles (usec): 00:14:50.155 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[10290], 00:14:50.155 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13435], 60.00th=[14615], 00:14:50.155 | 70.00th=[15795], 80.00th=[18220], 90.00th=[20579], 95.00th=[21103], 00:14:50.155 | 99.00th=[22938], 99.50th=[23200], 99.90th=[25297], 99.95th=[25560], 00:14:50.155 | 99.99th=[26084] 00:14:50.155 bw ( KiB/s): min=16384, max=16384, per=17.39%, avg=16384.00, stdev= 0.00, samples=1 00:14:50.155 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:50.155 lat (usec) : 1000=0.01% 00:14:50.155 lat (msec) : 2=0.10%, 4=0.12%, 10=18.06%, 20=73.65%, 50=8.06% 00:14:50.155 cpu : usr=2.30%, sys=4.40%, ctx=1016, majf=0, minf=1 00:14:50.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:50.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.155 issued rwts: total=4608,4695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.155 00:14:50.155 Run status group 0 (all jobs): 00:14:50.155 READ: bw=89.3MiB/s (93.6MB/s), 18.0MiB/s-26.0MiB/s (18.8MB/s-27.2MB/s), io=89.5MiB (93.8MB), run=1001-1002msec 00:14:50.155 WRITE: bw=92.0MiB/s (96.5MB/s), 18.3MiB/s-26.3MiB/s (19.2MB/s-27.5MB/s), io=92.2MiB (96.6MB), run=1001-1002msec 00:14:50.155 00:14:50.155 Disk stats (read/write): 00:14:50.155 nvme0n1: ios=5170/5238, merge=0/0, ticks=17887/17506, in_queue=35393, util=84.37% 00:14:50.155 nvme0n2: ios=5632/5709, merge=0/0, ticks=18506/17404, in_queue=35910, util=85.34% 00:14:50.155 nvme0n3: ios=4608/4835, merge=0/0, ticks=16771/16919, in_queue=33690, util=88.60% 00:14:50.155 nvme0n4: ios=3601/4096, merge=0/0, ticks=14609/15619, in_queue=30228, util=88.92% 00:14:50.155 12:58:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:50.155 [global] 00:14:50.155 thread=1 00:14:50.155 invalidate=1 00:14:50.155 rw=randwrite 00:14:50.155 time_based=1 00:14:50.155 runtime=1 00:14:50.155 ioengine=libaio 00:14:50.155 direct=1 00:14:50.155 bs=4096 00:14:50.155 iodepth=128 00:14:50.155 norandommap=0 00:14:50.155 numjobs=1 00:14:50.155 00:14:50.155 verify_dump=1 00:14:50.155 verify_backlog=512 00:14:50.155 verify_state_save=0 00:14:50.155 do_verify=1 00:14:50.155 verify=crc32c-intel 00:14:50.155 [job0] 00:14:50.155 filename=/dev/nvme0n1 00:14:50.155 
[job1] 00:14:50.155 filename=/dev/nvme0n2 00:14:50.155 [job2] 00:14:50.155 filename=/dev/nvme0n3 00:14:50.155 [job3] 00:14:50.155 filename=/dev/nvme0n4 00:14:50.155 Could not set queue depth (nvme0n1) 00:14:50.155 Could not set queue depth (nvme0n2) 00:14:50.155 Could not set queue depth (nvme0n3) 00:14:50.155 Could not set queue depth (nvme0n4) 00:14:50.413 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:50.413 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:50.413 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:50.413 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:50.413 fio-3.35 00:14:50.413 Starting 4 threads 00:14:51.787 00:14:51.787 job0: (groupid=0, jobs=1): err= 0: pid=3604708: Wed May 15 12:58:29 2024 00:14:51.787 read: IOPS=6336, BW=24.8MiB/s (26.0MB/s)(24.8MiB/1003msec) 00:14:51.787 slat (usec): min=2, max=4186, avg=78.42, stdev=331.62 00:14:51.787 clat (usec): min=1096, max=19930, avg=9797.64, stdev=3137.50 00:14:51.787 lat (usec): min=3295, max=19933, avg=9876.06, stdev=3153.94 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7046], 00:14:51.787 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9896], 00:14:51.787 | 70.00th=[10945], 80.00th=[12780], 90.00th=[14615], 95.00th=[15270], 00:14:51.787 | 99.00th=[18482], 99.50th=[19530], 99.90th=[19530], 99.95th=[19792], 00:14:51.787 | 99.99th=[20055] 00:14:51.787 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:14:51.787 slat (usec): min=2, max=4782, avg=71.80, stdev=316.89 00:14:51.787 clat (usec): min=2822, max=20641, avg=9710.56, stdev=3163.41 00:14:51.787 lat (usec): min=2825, max=20646, avg=9782.36, stdev=3177.83 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 6915], 00:14:51.787 | 30.00th=[ 7439], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[10028], 00:14:51.787 | 70.00th=[11207], 80.00th=[12649], 90.00th=[13829], 95.00th=[14746], 00:14:51.787 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:14:51.787 | 99.99th=[20579] 00:14:51.787 bw ( KiB/s): min=24576, max=28672, per=25.98%, avg=26624.00, stdev=2896.31, samples=2 00:14:51.787 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:14:51.787 lat (msec) : 2=0.01%, 4=0.25%, 10=60.81%, 20=38.47%, 50=0.45% 00:14:51.787 cpu : usr=2.89%, sys=5.19%, ctx=1059, majf=0, minf=1 00:14:51.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:51.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.787 issued rwts: total=6356,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.787 job1: (groupid=0, jobs=1): err= 0: pid=3604709: Wed May 15 12:58:29 2024 00:14:51.787 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:14:51.787 slat (usec): min=2, max=4689, avg=70.54, stdev=334.37 00:14:51.787 clat (usec): min=2862, max=16198, avg=9318.70, stdev=2807.23 00:14:51.787 lat (usec): min=2946, max=16204, avg=9389.24, stdev=2818.45 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 4490], 5.00th=[ 
5538], 10.00th=[ 5932], 20.00th=[ 6783], 00:14:51.787 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9765], 00:14:51.787 | 70.00th=[10683], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:14:51.787 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16188], 99.95th=[16188], 00:14:51.787 | 99.99th=[16188] 00:14:51.787 write: IOPS=7243, BW=28.3MiB/s (29.7MB/s)(28.4MiB/1003msec); 0 zone resets 00:14:51.787 slat (usec): min=2, max=4267, avg=64.26, stdev=290.12 00:14:51.787 clat (usec): min=2359, max=14362, avg=8261.78, stdev=2488.00 00:14:51.787 lat (usec): min=2368, max=14365, avg=8326.04, stdev=2497.97 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 4015], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 5932], 00:14:51.787 | 30.00th=[ 6521], 40.00th=[ 7308], 50.00th=[ 7963], 60.00th=[ 8586], 00:14:51.787 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11863], 95.00th=[13042], 00:14:51.787 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14353], 99.95th=[14353], 00:14:51.787 | 99.99th=[14353] 00:14:51.787 bw ( KiB/s): min=28072, max=29272, per=27.98%, avg=28672.00, stdev=848.53, samples=2 00:14:51.787 iops : min= 7018, max= 7318, avg=7168.00, stdev=212.13, samples=2 00:14:51.787 lat (msec) : 4=0.58%, 10=68.49%, 20=30.93% 00:14:51.787 cpu : usr=3.49%, sys=5.79%, ctx=1236, majf=0, minf=1 00:14:51.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:51.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.787 issued rwts: total=7168,7265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.787 job2: (groupid=0, jobs=1): err= 0: pid=3604710: Wed May 15 12:58:29 2024 00:14:51.787 read: IOPS=5798, BW=22.6MiB/s (23.8MB/s)(22.7MiB/1002msec) 00:14:51.787 slat (usec): min=2, max=5172, avg=82.05, stdev=389.52 00:14:51.787 clat (usec): min=1093, max=21815, avg=10606.79, stdev=3306.50 00:14:51.787 lat (usec): min=1101, max=21821, avg=10688.84, stdev=3320.43 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 4228], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 7898], 00:14:51.787 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10814], 00:14:51.787 | 70.00th=[11863], 80.00th=[12911], 90.00th=[15533], 95.00th=[17433], 00:14:51.787 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21890], 99.95th=[21890], 00:14:51.787 | 99.99th=[21890] 00:14:51.787 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:14:51.787 slat (usec): min=2, max=4236, avg=81.33, stdev=352.39 00:14:51.787 clat (usec): min=3652, max=22228, avg=10597.79, stdev=3327.21 00:14:51.787 lat (usec): min=3734, max=22233, avg=10679.12, stdev=3346.46 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 7832], 00:14:51.787 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10552], 00:14:51.787 | 70.00th=[11863], 80.00th=[13042], 90.00th=[16057], 95.00th=[17433], 00:14:51.787 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152], 00:14:51.787 | 99.99th=[22152] 00:14:51.787 bw ( KiB/s): min=21744, max=27408, per=23.98%, avg=24576.00, stdev=4005.05, samples=2 00:14:51.787 iops : min= 5436, max= 6852, avg=6144.00, stdev=1001.26, samples=2 00:14:51.787 lat (msec) : 2=0.08%, 4=0.15%, 10=53.05%, 20=45.88%, 50=0.84% 00:14:51.787 cpu : usr=2.30%, sys=5.59%, ctx=923, majf=0, minf=1 00:14:51.787 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:51.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.787 issued rwts: total=5810,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.787 job3: (groupid=0, jobs=1): err= 0: pid=3604711: Wed May 15 12:58:29 2024 00:14:51.787 read: IOPS=5406, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1003msec) 00:14:51.787 slat (usec): min=2, max=5292, avg=87.74, stdev=411.62 00:14:51.787 clat (usec): min=2219, max=23317, avg=11335.50, stdev=3227.12 00:14:51.787 lat (usec): min=2227, max=23323, avg=11423.24, stdev=3237.15 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8717], 00:14:51.787 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11600], 00:14:51.787 | 70.00th=[12780], 80.00th=[14222], 90.00th=[15270], 95.00th=[16909], 00:14:51.787 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22676], 99.95th=[23200], 00:14:51.787 | 99.99th=[23200] 00:14:51.787 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:14:51.787 slat (usec): min=2, max=3716, avg=89.17, stdev=374.24 00:14:51.787 clat (usec): min=4877, max=18773, avg=11592.77, stdev=3060.49 00:14:51.787 lat (usec): min=5478, max=18776, avg=11681.94, stdev=3070.28 00:14:51.787 clat percentiles (usec): 00:14:51.787 | 1.00th=[ 6194], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8586], 00:14:51.787 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11731], 60.00th=[12518], 00:14:51.787 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15926], 95.00th=[16909], 00:14:51.787 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:14:51.787 | 99.99th=[18744] 00:14:51.787 bw ( KiB/s): min=20016, max=25040, per=21.98%, avg=22528.00, stdev=3552.50, samples=2 00:14:51.787 iops : min= 5004, max= 6260, avg=5632.00, stdev=888.13, samples=2 00:14:51.787 lat (msec) : 4=0.29%, 10=38.89%, 20=60.07%, 50=0.75% 00:14:51.787 cpu : usr=2.99%, sys=4.19%, ctx=1013, majf=0, minf=1 00:14:51.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:51.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.787 issued rwts: total=5423,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.787 00:14:51.787 Run status group 0 (all jobs): 00:14:51.787 READ: bw=96.4MiB/s (101MB/s), 21.1MiB/s-27.9MiB/s (22.1MB/s-29.3MB/s), io=96.7MiB (101MB), run=1002-1003msec 00:14:51.787 WRITE: bw=100MiB/s (105MB/s), 21.9MiB/s-28.3MiB/s (23.0MB/s-29.7MB/s), io=100MiB (105MB), run=1002-1003msec 00:14:51.787 00:14:51.787 Disk stats (read/write): 00:14:51.787 nvme0n1: ios=5170/5342, merge=0/0, ticks=14975/14613, in_queue=29588, util=84.07% 00:14:51.787 nvme0n2: ios=5655/6144, merge=0/0, ticks=15995/15111, in_queue=31106, util=84.46% 00:14:51.787 nvme0n3: ios=4874/5120, merge=0/0, ticks=14593/14755, in_queue=29348, util=88.00% 00:14:51.787 nvme0n4: ios=4608/4673, merge=0/0, ticks=14835/14237, in_queue=29072, util=88.71% 00:14:51.787 12:58:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:51.787 12:58:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3604894 00:14:51.787 12:58:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:51.787 12:58:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:51.787 [global] 00:14:51.787 thread=1 00:14:51.787 invalidate=1 00:14:51.787 rw=read 00:14:51.787 time_based=1 00:14:51.787 runtime=10 00:14:51.787 ioengine=libaio 00:14:51.787 direct=1 00:14:51.787 bs=4096 00:14:51.787 iodepth=1 00:14:51.787 norandommap=1 00:14:51.787 numjobs=1 00:14:51.787 00:14:51.787 [job0] 00:14:51.787 filename=/dev/nvme0n1 00:14:51.787 [job1] 00:14:51.787 filename=/dev/nvme0n2 00:14:51.787 [job2] 00:14:51.787 filename=/dev/nvme0n3 00:14:51.787 [job3] 00:14:51.787 filename=/dev/nvme0n4 00:14:51.787 Could not set queue depth (nvme0n1) 00:14:51.787 Could not set queue depth (nvme0n2) 00:14:51.787 Could not set queue depth (nvme0n3) 00:14:51.787 Could not set queue depth (nvme0n4) 00:14:52.045 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:52.045 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:52.045 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:52.045 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:52.045 fio-3.35 00:14:52.045 Starting 4 threads 00:14:54.751 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:54.751 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=89088000, buflen=4096 00:14:54.751 fio: pid=3605009, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:54.751 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:55.007 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=82669568, buflen=4096 00:14:55.007 fio: pid=3605008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:55.007 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:55.007 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:55.270 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22994944, buflen=4096 00:14:55.270 fio: pid=3605006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:55.270 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:55.270 12:58:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:55.270 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=31174656, buflen=4096 00:14:55.270 fio: pid=3605007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:55.528 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:55.528 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:55.528 00:14:55.528 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=3605006: Wed May 15 12:58:33 2024 00:14:55.528 read: IOPS=7210, BW=28.2MiB/s (29.5MB/s)(85.9MiB/3051msec) 00:14:55.528 slat (usec): min=3, max=34913, avg=11.78, stdev=253.93 00:14:55.528 clat (usec): min=53, max=464, avg=125.34, stdev=31.75 00:14:55.528 lat (usec): min=61, max=35013, avg=137.12, stdev=255.72 00:14:55.528 clat percentiles (usec): 00:14:55.528 | 1.00th=[ 69], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 87], 00:14:55.528 | 30.00th=[ 115], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 133], 00:14:55.528 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 174], 00:14:55.528 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 237], 99.95th=[ 245], 00:14:55.528 | 99.99th=[ 433] 00:14:55.528 bw ( KiB/s): min=24224, max=32200, per=25.97%, avg=28014.40, stdev=3275.97, samples=5 00:14:55.528 iops : min= 6056, max= 8050, avg=7003.60, stdev=818.99, samples=5 00:14:55.528 lat (usec) : 100=25.21%, 250=74.75%, 500=0.03% 00:14:55.528 cpu : usr=2.43%, sys=8.10%, ctx=22005, majf=0, minf=1 00:14:55.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 issued rwts: total=21999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:55.528 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3605007: Wed May 15 12:58:33 2024 00:14:55.528 read: IOPS=7360, BW=28.8MiB/s (30.1MB/s)(93.7MiB/3260msec) 00:14:55.528 slat (usec): min=7, max=12668, avg=11.07, stdev=141.05 00:14:55.528 clat (usec): min=49, max=478, avg=123.33, stdev=36.34 00:14:55.528 lat (usec): min=59, max=12747, avg=134.40, stdev=145.24 00:14:55.528 clat percentiles (usec): 00:14:55.528 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 82], 00:14:55.528 | 30.00th=[ 113], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 135], 00:14:55.528 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:14:55.528 | 99.00th=[ 190], 99.50th=[ 202], 99.90th=[ 231], 99.95th=[ 243], 00:14:55.528 | 99.99th=[ 441] 00:14:55.528 bw ( KiB/s): min=23776, max=36076, per=25.96%, avg=28004.67, stdev=4615.40, samples=6 00:14:55.528 iops : min= 5944, max= 9019, avg=7001.17, stdev=1153.85, samples=6 00:14:55.528 lat (usec) : 50=0.01%, 100=26.16%, 250=73.80%, 500=0.03% 00:14:55.528 cpu : usr=2.03%, sys=8.56%, ctx=24002, majf=0, minf=1 00:14:55.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 issued rwts: total=23996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:55.528 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3605008: Wed May 15 12:58:33 2024 00:14:55.528 read: IOPS=7040, BW=27.5MiB/s (28.8MB/s)(78.8MiB/2867msec) 00:14:55.528 slat (usec): min=3, max=11907, avg=10.03, stdev=118.40 00:14:55.528 clat (usec): min=61, max=462, avg=130.45, stdev=29.69 00:14:55.528 lat (usec): min=73, max=12017, avg=140.48, stdev=122.06 00:14:55.528 clat percentiles (usec): 00:14:55.528 | 1.00th=[ 80], 5.00th=[ 85], 10.00th=[ 89], 20.00th=[ 96], 00:14:55.528 | 30.00th=[ 119], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 
139], 00:14:55.528 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 176], 00:14:55.528 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 235], 99.95th=[ 249], 00:14:55.528 | 99.99th=[ 400] 00:14:55.528 bw ( KiB/s): min=24136, max=30440, per=25.31%, avg=27302.40, stdev=2720.61, samples=5 00:14:55.528 iops : min= 6034, max= 7610, avg=6825.60, stdev=680.15, samples=5 00:14:55.528 lat (usec) : 100=24.58%, 250=75.37%, 500=0.05% 00:14:55.528 cpu : usr=1.88%, sys=7.92%, ctx=20186, majf=0, minf=1 00:14:55.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 issued rwts: total=20184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:55.528 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3605009: Wed May 15 12:58:33 2024 00:14:55.528 read: IOPS=8094, BW=31.6MiB/s (33.2MB/s)(85.0MiB/2687msec) 00:14:55.528 slat (nsec): min=8122, max=40958, avg=8957.48, stdev=1161.01 00:14:55.528 clat (usec): min=59, max=264, avg=112.27, stdev=25.51 00:14:55.528 lat (usec): min=69, max=273, avg=121.23, stdev=25.74 00:14:55.528 clat percentiles (usec): 00:14:55.528 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 90], 00:14:55.528 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 125], 00:14:55.528 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 161], 00:14:55.528 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 215], 99.95th=[ 225], 00:14:55.528 | 99.99th=[ 239] 00:14:55.528 bw ( KiB/s): min=27760, max=39144, per=30.06%, avg=32430.40, stdev=4973.67, samples=5 00:14:55.528 iops : min= 6940, max= 9786, avg=8107.60, stdev=1243.42, samples=5 00:14:55.528 lat (usec) : 100=50.41%, 250=49.58%, 500=0.01% 00:14:55.528 cpu : usr=3.28%, sys=8.41%, ctx=21751, majf=0, minf=2 00:14:55.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:55.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.528 issued rwts: total=21751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:55.528 00:14:55.528 Run status group 0 (all jobs): 00:14:55.528 READ: bw=105MiB/s (110MB/s), 27.5MiB/s-31.6MiB/s (28.8MB/s-33.2MB/s), io=343MiB (360MB), run=2687-3260msec 00:14:55.528 00:14:55.528 Disk stats (read/write): 00:14:55.528 nvme0n1: ios=19859/0, merge=0/0, ticks=2460/0, in_queue=2460, util=93.25% 00:14:55.528 nvme0n2: ios=21707/0, merge=0/0, ticks=2718/0, in_queue=2718, util=94.33% 00:14:55.528 nvme0n3: ios=19939/0, merge=0/0, ticks=2515/0, in_queue=2515, util=95.58% 00:14:55.528 nvme0n4: ios=21010/0, merge=0/0, ticks=2258/0, in_queue=2258, util=96.45% 00:14:55.528 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:55.528 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:55.786 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:55.786 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:56.043 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:56.043 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:56.300 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:56.300 12:58:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:56.300 12:58:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:56.300 12:58:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 3604894 00:14:56.300 12:58:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:56.300 12:58:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:57.232 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:57.490 nvmf hotplug test: fio failed as expected 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:57.490 rmmod nvme_rdma 00:14:57.490 
rmmod nvme_fabrics 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3602543 ']' 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3602543 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3602543 ']' 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3602543 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:14:57.490 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3602543 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3602543' 00:14:57.748 killing process with pid 3602543 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3602543 00:14:57.748 [2024-05-15 12:58:35.410691] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:57.748 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3602543 00:14:57.748 [2024-05-15 12:58:35.496499] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:58.008 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.008 12:58:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:58.008 00:14:58.008 real 0m26.263s 00:14:58.008 user 1m36.686s 00:14:58.008 sys 0m10.330s 00:14:58.009 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:58.009 12:58:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.009 ************************************ 00:14:58.009 END TEST nvmf_fio_target 00:14:58.009 ************************************ 00:14:58.009 12:58:35 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:58.009 12:58:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:58.009 12:58:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:58.009 12:58:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:58.009 ************************************ 00:14:58.009 START TEST nvmf_bdevio 00:14:58.009 ************************************ 00:14:58.009 12:58:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:58.338 * Looking for test storage... 
00:14:58.338 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.338 12:58:35 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.339 12:58:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
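The mlx=() table built above keys NIC discovery off PCI vendor/device IDs (0x15b3 is Mellanox; 0x1015, a ConnectX-4 Lx, is what this rig reports below). A minimal sketch of that discovery step, assuming pciutils' lspci is available and using a subset of the device IDs from the trace; the actual script walks a cached /sys/bus/pci view, so the helper and output format here are illustrative only:

mellanox=15b3
mlx_ids=(1013 1015 1017 1019 101d 1021)   # subset of the mlx=() IDs traced above
for id in "${mlx_ids[@]}"; do
    # lspci -D prints domain:bus:dev.func; -d filters by vendor:device
    while read -r addr _; do
        [[ -n $addr ]] || continue
        echo "Found $addr (0x$mellanox - 0x$id)"
        # net interfaces bound to this PCI function are listed in sysfs
        for net in "/sys/bus/pci/devices/$addr/net/"*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done < <(lspci -D -d "$mellanox:$id")
done

On this machine the loop would report the two 0000:18:00.x functions and their mlx_0_0 / mlx_0_1 interfaces, matching the "Found net devices under ..." lines in the trace.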
00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:04.905 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:04.905 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:04.905 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:04.906 Found net devices under 0000:18:00.0: mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:04.906 Found net devices under 0000:18:00.1: mlx_0_1 00:15:04.906 
12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:04.906 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:04.906 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:15:04.906 altname enp24s0f0np0 00:15:04.906 altname ens785f0np0 00:15:04.906 inet 192.168.100.8/24 scope global mlx_0_0 00:15:04.906 valid_lft forever preferred_lft forever 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:04.906 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:04.906 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:15:04.906 altname enp24s0f1np1 00:15:04.906 altname ens785f1np1 00:15:04.906 inet 192.168.100.9/24 scope global mlx_0_1 00:15:04.906 valid_lft forever preferred_lft forever 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:04.906 192.168.100.9' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:04.906 192.168.100.9' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:04.906 192.168.100.9' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:04.906 12:58:41 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.907 12:58:41 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3608663 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3608663 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3608663 ']' 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.907 12:58:41 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:04.907 [2024-05-15 12:58:41.852354] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:15:04.907 [2024-05-15 12:58:41.852406] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.907 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.907 [2024-05-15 12:58:41.921699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.907 [2024-05-15 12:58:42.016142] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.907 [2024-05-15 12:58:42.016179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.907 [2024-05-15 12:58:42.016189] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.907 [2024-05-15 12:58:42.016198] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.907 [2024-05-15 12:58:42.016205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
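nvmfappstart above boils down to launching nvmf_tgt with the requested core mask and polling the RPC socket until it answers. A rough sketch under those assumptions; the retry budget is illustrative and waitforlisten in autotest_common.sh is more thorough:

app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_sock=/var/tmp/spdk.sock

"$app" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!

# rpc.py fails until the target is up and listening on the Unix socket
for ((i = 0; i < 100; i++)); do
    if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready"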
00:15:04.907 [2024-05-15 12:58:42.016320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:04.907 [2024-05-15 12:58:42.016425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:04.907 [2024-05-15 12:58:42.016501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:04.907 [2024-05-15 12:58:42.016502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.907 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:04.907 [2024-05-15 12:58:42.744861] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c469d0/0x1c4aec0) succeed. 00:15:04.907 [2024-05-15 12:58:42.755341] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c48010/0x1c8c550) succeed. 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 Malloc0 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 [2024-05-15 12:58:42.916553] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor 
of trtype to be removed in v24.09 00:15:05.165 [2024-05-15 12:58:42.916882] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.165 { 00:15:05.165 "params": { 00:15:05.165 "name": "Nvme$subsystem", 00:15:05.165 "trtype": "$TEST_TRANSPORT", 00:15:05.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.165 "adrfam": "ipv4", 00:15:05.165 "trsvcid": "$NVMF_PORT", 00:15:05.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.165 "hdgst": ${hdgst:-false}, 00:15:05.165 "ddgst": ${ddgst:-false} 00:15:05.165 }, 00:15:05.165 "method": "bdev_nvme_attach_controller" 00:15:05.165 } 00:15:05.165 EOF 00:15:05.165 )") 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:05.165 12:58:42 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.165 "params": { 00:15:05.165 "name": "Nvme1", 00:15:05.165 "trtype": "rdma", 00:15:05.165 "traddr": "192.168.100.8", 00:15:05.165 "adrfam": "ipv4", 00:15:05.165 "trsvcid": "4420", 00:15:05.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.165 "hdgst": false, 00:15:05.165 "ddgst": false 00:15:05.165 }, 00:15:05.165 "method": "bdev_nvme_attach_controller" 00:15:05.165 }' 00:15:05.165 [2024-05-15 12:58:42.954991] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 
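Collapsed out of the rpc_cmd traces above, the target plumbing that bdevio exercises is five RPCs, with arguments copied from the trace (rpc.py defaults to /var/tmp/spdk.sock):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420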
00:15:05.165 [2024-05-15 12:58:42.955049] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608856 ] 00:15:05.165 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.165 [2024-05-15 12:58:43.026759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:05.422 [2024-05-15 12:58:43.113663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.422 [2024-05-15 12:58:43.113749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.422 [2024-05-15 12:58:43.113752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.422 I/O targets: 00:15:05.422 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:05.422 00:15:05.422 00:15:05.422 CUnit - A unit testing framework for C - Version 2.1-3 00:15:05.422 http://cunit.sourceforge.net/ 00:15:05.422 00:15:05.422 00:15:05.422 Suite: bdevio tests on: Nvme1n1 00:15:05.679 Test: blockdev write read block ...passed 00:15:05.679 Test: blockdev write zeroes read block ...passed 00:15:05.679 Test: blockdev write zeroes read no split ...passed 00:15:05.679 Test: blockdev write zeroes read split ...passed 00:15:05.679 Test: blockdev write zeroes read split partial ...passed 00:15:05.679 Test: blockdev reset ...[2024-05-15 12:58:43.326001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:05.680 [2024-05-15 12:58:43.349012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:05.680 [2024-05-15 12:58:43.375602] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
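The --json /dev/fd/62 argument above hands bdevio the generated attach-controller config through a process-substitution fd. An equivalent standalone run, assuming the standard SPDK subsystems/config JSON wrapper around the params printed in the trace and that the app will read its config from /dev/stdin:

test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF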
00:15:05.680 passed 00:15:05.680 Test: blockdev write read 8 blocks ...passed 00:15:05.680 Test: blockdev write read size > 128k ...passed 00:15:05.680 Test: blockdev write read invalid size ...passed 00:15:05.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:05.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:05.680 Test: blockdev write read max offset ...passed 00:15:05.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:05.680 Test: blockdev writev readv 8 blocks ...passed 00:15:05.680 Test: blockdev writev readv 30 x 1block ...passed 00:15:05.680 Test: blockdev writev readv block ...passed 00:15:05.680 Test: blockdev writev readv size > 128k ...passed 00:15:05.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:05.680 Test: blockdev comparev and writev ...[2024-05-15 12:58:43.378636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.378665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.378678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.378689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.378865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.378876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.378887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.378896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.379080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.379100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.379281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:05.680 [2024-05-15 12:58:43.379301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:05.680 passed 00:15:05.680 Test: blockdev nvme passthru rw ...passed 00:15:05.680 Test: blockdev nvme passthru vendor specific ...[2024-05-15 12:58:43.379571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:05.680 [2024-05-15 12:58:43.379586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:05.680 [2024-05-15 12:58:43.379643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:05.680 [2024-05-15 12:58:43.379702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:05.680 [2024-05-15 12:58:43.379747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:05.680 [2024-05-15 12:58:43.379757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:05.680 passed 00:15:05.680 Test: blockdev nvme admin passthru ...passed 00:15:05.680 Test: blockdev copy ...passed 00:15:05.680 00:15:05.680 Run Summary: Type Total Ran Passed Failed Inactive 00:15:05.680 suites 1 1 n/a 0 0 00:15:05.680 tests 23 23 23 0 0 00:15:05.680 asserts 152 152 152 0 n/a 00:15:05.680 00:15:05.680 Elapsed time = 0.172 seconds 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:05.937 12:58:43 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:05.938 rmmod nvme_rdma 00:15:05.938 rmmod nvme_fabrics 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3608663 ']' 00:15:05.938 12:58:43 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3608663 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 3608663 ']' 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3608663 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3608663 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3608663' 00:15:05.938 killing process with pid 3608663 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3608663 00:15:05.938 [2024-05-15 12:58:43.734748] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.938 12:58:43 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3608663 00:15:06.196 [2024-05-15 12:58:43.821150] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:06.196 12:58:44 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.196 12:58:44 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:06.196 00:15:06.196 real 0m8.258s 00:15:06.196 user 0m10.817s 00:15:06.196 sys 0m5.112s 00:15:06.196 12:58:44 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.196 12:58:44 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:06.196 ************************************ 00:15:06.196 END TEST nvmf_bdevio 00:15:06.196 ************************************ 00:15:06.455 12:58:44 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:06.455 12:58:44 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:06.455 12:58:44 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.455 12:58:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:06.455 ************************************ 00:15:06.455 START TEST nvmf_auth_target 00:15:06.455 ************************************ 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:06.455 * Looking for test storage... 
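The teardown above (killprocess 3608663) follows a guard-then-kill pattern: confirm the PID still names an SPDK reactor, refuse to signal a bare sudo wrapper, then kill and reap. A simplified sketch of that pattern; the real helper in autotest_common.sh also resolves sudo-launched children before signalling:

killprocess() {
    local pid=$1
    local name
    # ps -o comm= prints only the executable name (reactor_3 in the trace)
    name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
    [[ $name == sudo ]] && return 1   # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true   # reap; works because this shell spawned it
}

killprocess "$nvmfpid"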
00:15:06.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.455 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.456 12:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:13.021 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:13.021 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.021 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:13.022 Found net devices under 0000:18:00.0: mlx_0_0 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:13.022 Found net devices under 0000:18:00.1: mlx_0_1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:13.022 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:13.022 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:15:13.022 altname enp24s0f0np0 00:15:13.022 altname ens785f0np0 00:15:13.022 inet 192.168.100.8/24 scope global mlx_0_0 00:15:13.022 valid_lft forever preferred_lft forever 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:13.022 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:13.022 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:15:13.022 altname enp24s0f1np1 00:15:13.022 altname ens785f1np1 00:15:13.022 inet 192.168.100.9/24 scope global mlx_0_1 00:15:13.022 valid_lft forever preferred_lft forever 00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 
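The get_ip_address calls repeated through this trace are a three-stage pipeline over `ip -o -4 addr show`; reproduced here for reference, matching the traced nvmf/common.sh@112-113 commands, with awk and cut splitting the CIDR notation:

get_ip_address() {
    local interface=$1
    # one line per address; field 4 is "A.B.C.D/prefix", cut drops the prefix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1   # -> 192.168.100.9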
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:15:13.022 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:15:13.023 192.168.100.9'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:15:13.023 192.168.100.9'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:15:13.023 192.168.100.9'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3611780
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3611780
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3611780 ']'
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
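As the nvmf/common.sh@456-458 trace shows, the two target addresses are peeled off the newline-separated RDMA_IP_LIST with head/tail. Condensed into plain shell (values as logged on this rig):

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9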
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable
00:15:13.023 12:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=3611971
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a1596d6727daf7a33c3714aaee77e20e157d4c216ea973e
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4co
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a1596d6727daf7a33c3714aaee77e20e157d4c216ea973e 0
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a1596d6727daf7a33c3714aaee77e20e157d4c216ea973e 0
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a1596d6727daf7a33c3714aaee77e20e157d4c216ea973e
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4co
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4co
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.4co
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:13.590 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d62abdd6cf2f5fb57f4e14138a22b343
00:15:13.849 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:15:13.849 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zws
00:15:13.849 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d62abdd6cf2f5fb57f4e14138a22b343 1
00:15:13.849 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d62abdd6cf2f5fb57f4e14138a22b343 1
00:15:13.849 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d62abdd6cf2f5fb57f4e14138a22b343
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zws
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zws
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.zws
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62e204f944676d9c59ffe0b0415d9785ac3eefa1b05d9a10
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.z9C
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62e204f944676d9c59ffe0b0415d9785ac3eefa1b05d9a10 2
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 62e204f944676d9c59ffe0b0415d9785ac3eefa1b05d9a10 2
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62e204f944676d9c59ffe0b0415d9785ac3eefa1b05d9a10
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.z9C
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.z9C
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.z9C
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5eb952f30149af269bd9940dd413277508db398f42ffd661a00c5a0c9399c5c0
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eXv
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5eb952f30149af269bd9940dd413277508db398f42ffd661a00c5a0c9399c5c0 3
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5eb952f30149af269bd9940dd413277508db398f42ffd661a00c5a0c9399c5c0 3
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5eb952f30149af269bd9940dd413277508db398f42ffd661a00c5a0c9399c5c0
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eXv
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eXv
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.eXv
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 3611780
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3611780 ']'
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:13.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
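gen_dhchap_key, traced four times above (null/48, sha256/32, sha384/48, sha512/64), draws len/2 random bytes as a len-character hex string and wraps it into a DHHC-1 secret whose second field is the digest index (00=null, 01=sha256, 02=sha384, 03=sha512). The python one-liner's body is elided by xtrace; the sketch below reconstructs it on the assumption that the blob is base64(key bytes + 4-byte CRC32 trailer), which is consistent with the DHHC-1:00:...D6ZeJA==: secret that appears later in this log:

# null/48 variant; the other three differ only in length and digest index
key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity trailer (assumed detail)
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$file"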
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable
00:15:13.850 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 3611971 /var/tmp/host.sock
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3611971 ']'
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:15:14.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable
00:15:14.108 12:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}"
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4co
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4co
00:15:14.370 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4co
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}"
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zws
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zws
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zws
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}"
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.z9C
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.z9C
00:15:14.630 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.z9C
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}"
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.eXv
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.eXv
00:15:14.889 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.eXv
00:15:15.147 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}"
00:15:15.147 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:15.147 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:15.147 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:15.147 12:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:15.147 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:15.405
00:15:15.405 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:15.405 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:15.405 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:15.664 {
00:15:15.664 "cntlid": 1,
00:15:15.664 "qid": 0,
00:15:15.664 "state": "enabled",
00:15:15.664 "listen_address": {
00:15:15.664 "trtype": "RDMA",
00:15:15.664 "adrfam": "IPv4",
00:15:15.664 "traddr": "192.168.100.8",
00:15:15.664 "trsvcid": "4420"
00:15:15.664 },
00:15:15.664 "peer_address": {
00:15:15.664 "trtype": "RDMA",
00:15:15.664 "adrfam": "IPv4",
00:15:15.664 "traddr": "192.168.100.8",
00:15:15.664 "trsvcid": "40052"
00:15:15.664 },
00:15:15.664 "auth": {
00:15:15.664 "state": "completed",
00:15:15.664 "digest": "sha256",
00:15:15.664 "dhgroup": "null"
00:15:15.664 }
00:15:15.664 }
00:15:15.664 ]'
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:15.664 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.923 12:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:16.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:16.935 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:17.193
00:15:17.193 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:17.193 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:17.193 12:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:17.451 {
00:15:17.451 "cntlid": 3,
00:15:17.451 "qid": 0,
00:15:17.451 "state": "enabled",
00:15:17.451 "listen_address": {
00:15:17.451 "trtype": "RDMA",
00:15:17.451 "adrfam": "IPv4",
00:15:17.451 "traddr": "192.168.100.8",
00:15:17.451 "trsvcid": "4420"
00:15:17.451 },
00:15:17.451 "peer_address": {
00:15:17.451 "trtype": "RDMA",
00:15:17.451 "adrfam": "IPv4",
00:15:17.451 "traddr": "192.168.100.8",
00:15:17.451 "trsvcid": "58706"
00:15:17.451 },
00:15:17.451 "auth": {
00:15:17.451 "state": "completed",
00:15:17.451 "digest": "sha256",
00:15:17.451 "dhgroup": "null"
00:15:17.451 }
00:15:17.451 }
00:15:17.451 ]'
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:17.451 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:17.709 12:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv:
00:15:18.276 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
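Everything from target/auth.sh@87 down to @48 above is one connect_authenticate round, and it repeats for every digest/dhgroup/key-id combination. Collapsing the xtrace into the commands actually issued gives roughly the sketch below (hostrpc wraps rpc.py on the host socket exactly as auth.sh@31 shows; rpc_cmd in the real suite is a richer autotest_common.sh helper, rendered here as a plain wrapper on the target's default socket):

hostrpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_cmd() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
uuid=809f3706-e051-e711-906e-0017a4403562

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$uuid" --dhchap-key key0
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
# the round passes only if the qpair reports a completed DH-HMAC-CHAP exchange
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0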
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:18.534 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:18.792
00:15:18.793 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:18.793 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:18.793 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:19.051 {
00:15:19.051 "cntlid": 5,
00:15:19.051 "qid": 0,
00:15:19.051 "state": "enabled",
00:15:19.051 "listen_address": {
00:15:19.051 "trtype": "RDMA",
00:15:19.051 "adrfam": "IPv4",
00:15:19.051 "traddr": "192.168.100.8",
00:15:19.051 "trsvcid": "4420"
00:15:19.051 },
00:15:19.051 "peer_address": {
00:15:19.051 "trtype": "RDMA",
00:15:19.051 "adrfam": "IPv4",
00:15:19.051 "traddr": "192.168.100.8",
00:15:19.051 "trsvcid": "41719"
00:15:19.051 },
00:15:19.051 "auth": {
00:15:19.051 "state": "completed",
00:15:19.051 "digest": "sha256",
00:15:19.051 "dhgroup": "null"
00:15:19.051 }
00:15:19.051 }
00:15:19.051 ]'
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:19.051 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:19.309 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:19.309 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.309 12:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.309 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==:
00:15:19.874 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:20.131 12:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:20.389 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:20.390 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:20.390
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:20.647 {
00:15:20.647 "cntlid": 7,
00:15:20.647 "qid": 0,
00:15:20.647 "state": "enabled",
00:15:20.647 "listen_address": {
00:15:20.647 "trtype": "RDMA",
00:15:20.647 "adrfam": "IPv4",
00:15:20.647 "traddr": "192.168.100.8",
00:15:20.647 "trsvcid": "4420"
00:15:20.647 },
00:15:20.647 "peer_address": {
00:15:20.647 "trtype": "RDMA",
00:15:20.647 "adrfam": "IPv4",
00:15:20.647 "traddr": "192.168.100.8",
00:15:20.647 "trsvcid": "39685"
00:15:20.647 },
00:15:20.647 "auth": {
00:15:20.647 "state": "completed",
00:15:20.647 "digest": "sha256",
00:15:20.647 "dhgroup": "null"
00:15:20.647 }
00:15:20.647 }
00:15:20.647 ]'
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:20.647 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:20.905 12:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=:
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:21.839 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:15:22.097
00:15:22.097 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:22.097 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:22.097 12:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:22.363 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:22.363 {
00:15:22.363 "cntlid": 9,
00:15:22.363 "qid": 0,
00:15:22.363 "state": "enabled",
00:15:22.363 "listen_address": {
00:15:22.363 "trtype": "RDMA",
00:15:22.363 "adrfam": "IPv4",
00:15:22.363 "traddr": "192.168.100.8",
00:15:22.363 "trsvcid": "4420"
00:15:22.364 },
00:15:22.364 "peer_address": {
00:15:22.364 "trtype": "RDMA",
00:15:22.364 "adrfam": "IPv4",
00:15:22.364 "traddr": "192.168.100.8",
00:15:22.364 "trsvcid": "57057"
00:15:22.364 },
00:15:22.364 "auth": {
00:15:22.364 "state": "completed",
00:15:22.364 "digest": "sha256",
00:15:22.364 "dhgroup": "ffdhe2048"
00:15:22.364 }
00:15:22.364 }
00:15:22.364 ]'
00:15:22.364 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:22.364 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:22.364 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.625 12:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:23.560 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:15:23.819
00:15:23.819 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:23.819 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.819 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:15:24.078 {
00:15:24.078 "cntlid": 11,
00:15:24.078 "qid": 0,
00:15:24.078 "state": "enabled",
00:15:24.078 "listen_address": {
00:15:24.078 "trtype": "RDMA",
00:15:24.078 "adrfam": "IPv4",
00:15:24.078 "traddr": "192.168.100.8",
00:15:24.078 "trsvcid": "4420"
00:15:24.078 },
00:15:24.078 "peer_address": {
00:15:24.078 "trtype": "RDMA",
00:15:24.078 "adrfam": "IPv4",
00:15:24.078 "traddr": "192.168.100.8",
00:15:24.078 "trsvcid": "42827"
00:15:24.078 },
00:15:24.078 "auth": {
00:15:24.078 "state": "completed",
00:15:24.078 "digest": "sha256",
00:15:24.078 "dhgroup": "ffdhe2048"
00:15:24.078 }
00:15:24.078 }
00:15:24.078 ]'
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:24.078 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:15:24.336 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.336 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.336 12:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:24.336 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv:
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:25.272 12:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:25.272 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:15:25.529
00:15:25.529 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:15:25.529 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
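After each RPC-driven round, target/auth.sh@51-53 re-runs the same authentication through the kernel initiator: nvme-cli is handed the formatted DHHC-1 secret directly (the secret strings visible above are exactly the gen_dhchap_key outputs), and a clean disconnect proves the authenticated connection was usable. In sketch form, reading the secret back from one of the key files instead of inlining it:

uuid=809f3706-e051-e711-906e-0017a4403562
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" \
    --dhchap-secret "$(cat /tmp/spdk.key-sha384.z9C)"    # e.g. DHHC-1:02:...:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0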
00:15:25.529 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:25.786 { 00:15:25.786 "cntlid": 13, 00:15:25.786 "qid": 0, 00:15:25.786 "state": "enabled", 00:15:25.786 "listen_address": { 00:15:25.786 "trtype": "RDMA", 00:15:25.786 "adrfam": "IPv4", 00:15:25.786 "traddr": "192.168.100.8", 00:15:25.786 "trsvcid": "4420" 00:15:25.786 }, 00:15:25.786 "peer_address": { 00:15:25.786 "trtype": "RDMA", 00:15:25.786 "adrfam": "IPv4", 00:15:25.786 "traddr": "192.168.100.8", 00:15:25.786 "trsvcid": "50122" 00:15:25.786 }, 00:15:25.786 "auth": { 00:15:25.786 "state": "completed", 00:15:25.786 "digest": "sha256", 00:15:25.786 "dhgroup": "ffdhe2048" 00:15:25.786 } 00:15:25.786 } 00:15:25.786 ]' 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.786 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:25.787 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.787 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:26.044 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.044 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.044 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.044 12:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:15:26.611 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:26.870 
12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.870 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.129 12:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.387 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:27.387 { 00:15:27.387 "cntlid": 15, 00:15:27.387 "qid": 0, 00:15:27.387 "state": "enabled", 00:15:27.387 "listen_address": { 00:15:27.387 "trtype": "RDMA", 00:15:27.387 "adrfam": "IPv4", 00:15:27.387 "traddr": "192.168.100.8", 00:15:27.387 "trsvcid": "4420" 00:15:27.387 }, 00:15:27.387 "peer_address": { 00:15:27.387 "trtype": "RDMA", 00:15:27.387 "adrfam": "IPv4", 00:15:27.387 "traddr": "192.168.100.8", 00:15:27.387 "trsvcid": "48429" 00:15:27.387 }, 00:15:27.387 "auth": { 00:15:27.387 "state": "completed", 
00:15:27.387 "digest": "sha256", 00:15:27.387 "dhgroup": "ffdhe2048" 00:15:27.387 } 00:15:27.387 } 00:15:27.387 ]' 00:15:27.387 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.645 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.904 12:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.471 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:28.731 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:28.989 00:15:28.989 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:28.989 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:28.989 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:29.248 { 00:15:29.248 "cntlid": 17, 00:15:29.248 "qid": 0, 00:15:29.248 "state": "enabled", 00:15:29.248 "listen_address": { 00:15:29.248 "trtype": "RDMA", 00:15:29.248 "adrfam": "IPv4", 00:15:29.248 "traddr": "192.168.100.8", 00:15:29.248 "trsvcid": "4420" 00:15:29.248 }, 00:15:29.248 "peer_address": { 00:15:29.248 "trtype": "RDMA", 00:15:29.248 "adrfam": "IPv4", 00:15:29.248 "traddr": "192.168.100.8", 00:15:29.248 "trsvcid": "39072" 00:15:29.248 }, 00:15:29.248 "auth": { 00:15:29.248 "state": "completed", 00:15:29.248 "digest": "sha256", 00:15:29.248 "dhgroup": "ffdhe3072" 00:15:29.248 } 00:15:29.248 } 00:15:29.248 ]' 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.248 12:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:29.248 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.248 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.248 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.506 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:30.072 12:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:30.331 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:30.590 00:15:30.590 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:30.590 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:30.590 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:30.849 { 00:15:30.849 "cntlid": 19, 00:15:30.849 "qid": 0, 00:15:30.849 "state": "enabled", 00:15:30.849 "listen_address": { 00:15:30.849 "trtype": "RDMA", 00:15:30.849 "adrfam": "IPv4", 00:15:30.849 "traddr": "192.168.100.8", 00:15:30.849 "trsvcid": "4420" 00:15:30.849 }, 00:15:30.849 "peer_address": { 00:15:30.849 "trtype": "RDMA", 00:15:30.849 "adrfam": "IPv4", 00:15:30.849 "traddr": "192.168.100.8", 00:15:30.849 "trsvcid": "50926" 00:15:30.849 }, 00:15:30.849 "auth": { 00:15:30.849 "state": "completed", 00:15:30.849 "digest": "sha256", 00:15:30.849 "dhgroup": "ffdhe3072" 00:15:30.849 } 00:15:30.849 } 00:15:30.849 ]' 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.849 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.108 12:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.674 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:31.933 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:32.192 00:15:32.192 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:32.192 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:32.192 12:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:32.450 { 00:15:32.450 "cntlid": 21, 00:15:32.450 "qid": 0, 00:15:32.450 "state": "enabled", 00:15:32.450 "listen_address": { 00:15:32.450 "trtype": "RDMA", 00:15:32.450 "adrfam": "IPv4", 00:15:32.450 "traddr": "192.168.100.8", 00:15:32.450 "trsvcid": "4420" 00:15:32.450 }, 00:15:32.450 "peer_address": { 00:15:32.450 "trtype": "RDMA", 00:15:32.450 "adrfam": "IPv4", 00:15:32.450 "traddr": "192.168.100.8", 00:15:32.450 "trsvcid": "32889" 00:15:32.450 }, 00:15:32.450 "auth": { 00:15:32.450 "state": "completed", 00:15:32.450 "digest": "sha256", 00:15:32.450 "dhgroup": "ffdhe3072" 00:15:32.450 } 00:15:32.450 } 00:15:32.450 ]' 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.450 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.709 12:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:15:33.275 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.276 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:33.276 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.276 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.276 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:33.534 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.534 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.535 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.793 00:15:33.793 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:33.793 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:33.793 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:34.051 { 00:15:34.051 "cntlid": 23, 00:15:34.051 "qid": 0, 00:15:34.051 "state": "enabled", 00:15:34.051 "listen_address": { 00:15:34.051 "trtype": "RDMA", 00:15:34.051 "adrfam": "IPv4", 00:15:34.051 "traddr": "192.168.100.8", 00:15:34.051 "trsvcid": "4420" 00:15:34.051 }, 00:15:34.051 "peer_address": { 00:15:34.051 "trtype": "RDMA", 00:15:34.051 "adrfam": "IPv4", 00:15:34.051 "traddr": "192.168.100.8", 00:15:34.051 "trsvcid": "43489" 00:15:34.051 }, 00:15:34.051 "auth": { 00:15:34.051 "state": "completed", 00:15:34.051 "digest": "sha256", 00:15:34.051 "dhgroup": "ffdhe3072" 00:15:34.051 } 00:15:34.051 } 00:15:34.051 ]' 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.051 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:34.309 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.309 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.309 12:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.309 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:15:34.876 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:35.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.136 12:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:35.395 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:35.654 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:35.654 { 00:15:35.654 "cntlid": 25, 00:15:35.654 "qid": 0, 00:15:35.654 "state": "enabled", 00:15:35.654 "listen_address": { 00:15:35.654 "trtype": "RDMA", 00:15:35.654 "adrfam": "IPv4", 00:15:35.654 "traddr": "192.168.100.8", 00:15:35.654 "trsvcid": "4420" 00:15:35.654 }, 00:15:35.654 "peer_address": { 00:15:35.654 "trtype": "RDMA", 00:15:35.654 "adrfam": "IPv4", 00:15:35.654 "traddr": "192.168.100.8", 00:15:35.654 "trsvcid": "40609" 00:15:35.654 }, 00:15:35.654 "auth": { 00:15:35.654 "state": "completed", 00:15:35.654 "digest": "sha256", 00:15:35.654 "dhgroup": "ffdhe4096" 00:15:35.654 } 00:15:35.654 } 00:15:35.654 ]' 00:15:35.654 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.912 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.170 12:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.737 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate 
sha256 ffdhe4096 1 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:36.996 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:37.254 00:15:37.254 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:37.254 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:37.254 12:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:37.512 { 00:15:37.512 "cntlid": 27, 00:15:37.512 "qid": 0, 00:15:37.512 "state": "enabled", 00:15:37.512 "listen_address": { 00:15:37.512 "trtype": "RDMA", 00:15:37.512 "adrfam": "IPv4", 00:15:37.512 "traddr": "192.168.100.8", 00:15:37.512 "trsvcid": "4420" 00:15:37.512 }, 00:15:37.512 "peer_address": { 00:15:37.512 "trtype": "RDMA", 00:15:37.512 "adrfam": "IPv4", 00:15:37.512 "traddr": "192.168.100.8", 00:15:37.512 "trsvcid": "35978" 00:15:37.512 }, 00:15:37.512 "auth": { 00:15:37.512 "state": "completed", 00:15:37.512 "digest": "sha256", 00:15:37.512 "dhgroup": "ffdhe4096" 00:15:37.512 } 00:15:37.512 } 00:15:37.512 ]' 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.512 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.771 12:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.337 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:38.595 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:38.853 00:15:38.853 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:38.853 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:38.853 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:39.111 { 00:15:39.111 "cntlid": 29, 00:15:39.111 "qid": 0, 00:15:39.111 "state": "enabled", 00:15:39.111 "listen_address": { 00:15:39.111 "trtype": "RDMA", 00:15:39.111 "adrfam": "IPv4", 00:15:39.111 "traddr": "192.168.100.8", 00:15:39.111 "trsvcid": "4420" 00:15:39.111 }, 00:15:39.111 "peer_address": { 00:15:39.111 "trtype": "RDMA", 00:15:39.111 "adrfam": "IPv4", 00:15:39.111 "traddr": "192.168.100.8", 00:15:39.111 "trsvcid": "56904" 00:15:39.111 }, 00:15:39.111 "auth": { 00:15:39.111 "state": "completed", 00:15:39.111 "digest": "sha256", 00:15:39.111 "dhgroup": "ffdhe4096" 00:15:39.111 } 00:15:39.111 } 00:15:39.111 ]' 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.111 12:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:39.369 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.369 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.369 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.369 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:15:39.934 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
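[Note] The qpair JSON dumped above is how each iteration verifies the negotiated parameters: target/auth.sh@45-47 pull .auth.digest, .auth.dhgroup, and .auth.state out of nvmf_subsystem_get_qpairs with jq and compare them against the expected values. A readable equivalent of those checks (values taken from the ffdhe4096 pass above; rpc_cmd is again the framework's target-side RPC wrapper, an assumption outside that framework):

    # Assert the negotiated DH-HMAC-CHAP parameters on the target's qpair.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]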
00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.191 12:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.449 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.706 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.706 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:40.964 { 00:15:40.964 "cntlid": 31, 00:15:40.964 "qid": 0, 00:15:40.964 "state": "enabled", 00:15:40.964 
"listen_address": { 00:15:40.964 "trtype": "RDMA", 00:15:40.964 "adrfam": "IPv4", 00:15:40.964 "traddr": "192.168.100.8", 00:15:40.964 "trsvcid": "4420" 00:15:40.964 }, 00:15:40.964 "peer_address": { 00:15:40.964 "trtype": "RDMA", 00:15:40.964 "adrfam": "IPv4", 00:15:40.964 "traddr": "192.168.100.8", 00:15:40.964 "trsvcid": "43785" 00:15:40.964 }, 00:15:40.964 "auth": { 00:15:40.964 "state": "completed", 00:15:40.964 "digest": "sha256", 00:15:40.964 "dhgroup": "ffdhe4096" 00:15:40.964 } 00:15:40.964 } 00:15:40.964 ]' 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.964 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.221 12:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.787 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:42.046 12:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:42.305 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:42.563 { 00:15:42.563 "cntlid": 33, 00:15:42.563 "qid": 0, 00:15:42.563 "state": "enabled", 00:15:42.563 "listen_address": { 00:15:42.563 "trtype": "RDMA", 00:15:42.563 "adrfam": "IPv4", 00:15:42.563 "traddr": "192.168.100.8", 00:15:42.563 "trsvcid": "4420" 00:15:42.563 }, 00:15:42.563 "peer_address": { 00:15:42.563 "trtype": "RDMA", 00:15:42.563 "adrfam": "IPv4", 00:15:42.563 "traddr": "192.168.100.8", 00:15:42.563 "trsvcid": "42283" 00:15:42.563 }, 00:15:42.563 "auth": { 00:15:42.563 "state": "completed", 00:15:42.563 "digest": "sha256", 00:15:42.563 "dhgroup": "ffdhe6144" 00:15:42.563 } 00:15:42.563 } 00:15:42.563 ]' 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:42.563 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.821 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:42.821 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.821 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:42.821 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.821 12:59:20 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.821 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.079 12:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:15:43.644 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.644 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:43.644 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.644 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.645 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.645 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:43.645 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.645 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:43.903 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:44.161 00:15:44.161 12:59:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:44.161 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:44.161 12:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:44.419 { 00:15:44.419 "cntlid": 35, 00:15:44.419 "qid": 0, 00:15:44.419 "state": "enabled", 00:15:44.419 "listen_address": { 00:15:44.419 "trtype": "RDMA", 00:15:44.419 "adrfam": "IPv4", 00:15:44.419 "traddr": "192.168.100.8", 00:15:44.419 "trsvcid": "4420" 00:15:44.419 }, 00:15:44.419 "peer_address": { 00:15:44.419 "trtype": "RDMA", 00:15:44.419 "adrfam": "IPv4", 00:15:44.419 "traddr": "192.168.100.8", 00:15:44.419 "trsvcid": "49580" 00:15:44.419 }, 00:15:44.419 "auth": { 00:15:44.419 "state": "completed", 00:15:44.419 "digest": "sha256", 00:15:44.419 "dhgroup": "ffdhe6144" 00:15:44.419 } 00:15:44.419 } 00:15:44.419 ]' 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.419 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.677 12:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:15:45.242 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.498 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:45.498 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.498 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.498 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:45.498 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:45.499 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.499 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.756 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.757 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:45.757 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:46.015 00:15:46.015 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:46.015 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:46.015 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:46.273 { 00:15:46.273 "cntlid": 37, 00:15:46.273 "qid": 0, 00:15:46.273 "state": "enabled", 00:15:46.273 "listen_address": { 00:15:46.273 "trtype": "RDMA", 00:15:46.273 "adrfam": "IPv4", 00:15:46.273 "traddr": "192.168.100.8", 00:15:46.273 "trsvcid": "4420" 00:15:46.273 }, 00:15:46.273 "peer_address": { 00:15:46.273 "trtype": "RDMA", 00:15:46.273 "adrfam": "IPv4", 00:15:46.273 "traddr": 
"192.168.100.8", 00:15:46.273 "trsvcid": "48508" 00:15:46.273 }, 00:15:46.273 "auth": { 00:15:46.273 "state": "completed", 00:15:46.273 "digest": "sha256", 00:15:46.273 "dhgroup": "ffdhe6144" 00:15:46.273 } 00:15:46.273 } 00:15:46.273 ]' 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.273 12:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:46.273 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.273 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:46.273 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.273 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.273 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.531 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:15:47.096 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.096 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:47.096 12:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.096 12:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.354 12:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.354 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:47.354 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.354 12:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.354 
12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.354 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.611 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:47.869 { 00:15:47.869 "cntlid": 39, 00:15:47.869 "qid": 0, 00:15:47.869 "state": "enabled", 00:15:47.869 "listen_address": { 00:15:47.869 "trtype": "RDMA", 00:15:47.869 "adrfam": "IPv4", 00:15:47.869 "traddr": "192.168.100.8", 00:15:47.869 "trsvcid": "4420" 00:15:47.869 }, 00:15:47.869 "peer_address": { 00:15:47.869 "trtype": "RDMA", 00:15:47.869 "adrfam": "IPv4", 00:15:47.869 "traddr": "192.168.100.8", 00:15:47.869 "trsvcid": "41493" 00:15:47.869 }, 00:15:47.869 "auth": { 00:15:47.869 "state": "completed", 00:15:47.869 "digest": "sha256", 00:15:47.869 "dhgroup": "ffdhe6144" 00:15:47.869 } 00:15:47.869 } 00:15:47.869 ]' 00:15:47.869 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:48.127 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.127 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:48.127 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.127 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:48.128 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.128 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.128 12:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.385 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.952 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:49.267 12:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:49.576 00:15:49.576 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:49.576 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:49.576 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:49.834 { 00:15:49.834 "cntlid": 41, 00:15:49.834 "qid": 0, 00:15:49.834 "state": "enabled", 00:15:49.834 "listen_address": { 00:15:49.834 "trtype": "RDMA", 00:15:49.834 "adrfam": "IPv4", 00:15:49.834 "traddr": "192.168.100.8", 00:15:49.834 "trsvcid": "4420" 00:15:49.834 }, 00:15:49.834 "peer_address": { 00:15:49.834 "trtype": "RDMA", 00:15:49.834 "adrfam": "IPv4", 00:15:49.834 "traddr": "192.168.100.8", 00:15:49.834 "trsvcid": "49808" 00:15:49.834 }, 00:15:49.834 "auth": { 00:15:49.834 "state": "completed", 00:15:49.834 "digest": "sha256", 00:15:49.834 "dhgroup": "ffdhe8192" 00:15:49.834 } 00:15:49.834 } 00:15:49.834 ]' 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.834 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:50.093 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.093 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.093 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.093 12:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.026 
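Each round also re-authenticates with the kernel initiator through nvme-cli. The --dhchap-secret values are NVMe-oF textual secret representations, DHHC-1:<t>:<base64 of secret plus CRC>:, where <t> records how the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); that is why key0 through key3 in this log carry prefixes DHHC-1:00: through DHHC-1:03:. The connect/disconnect pair as issued in this trace, with the secret elided here (the full strings appear in the log itself):

    # placeholder secret; substitute the full DHHC-1:00:... string from the log
    key0_secret='DHHC-1:00:<base64-secret-and-crc>:'
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
        --hostid 809f3706-e051-e711-906e-0017a4403562 \
        --dhchap-secret "$key0_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
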
12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.026 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:51.027 12:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:51.594 00:15:51.594 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:51.594 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:51.594 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:51.852 { 00:15:51.852 "cntlid": 43, 00:15:51.852 "qid": 0, 00:15:51.852 "state": "enabled", 00:15:51.852 "listen_address": { 00:15:51.852 "trtype": "RDMA", 00:15:51.852 "adrfam": "IPv4", 00:15:51.852 "traddr": "192.168.100.8", 00:15:51.852 "trsvcid": "4420" 00:15:51.852 }, 00:15:51.852 "peer_address": { 00:15:51.852 "trtype": "RDMA", 00:15:51.852 "adrfam": "IPv4", 00:15:51.852 "traddr": "192.168.100.8", 00:15:51.852 "trsvcid": "40748" 00:15:51.852 }, 00:15:51.852 "auth": { 00:15:51.852 "state": "completed", 00:15:51.852 "digest": "sha256", 00:15:51.852 "dhgroup": "ffdhe8192" 00:15:51.852 } 00:15:51.852 } 00:15:51.852 ]' 00:15:51.852 12:59:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.852 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.111 12:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:15:52.677 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:52.935 12:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.501 00:15:53.501 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:53.501 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.501 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:53.760 { 00:15:53.760 "cntlid": 45, 00:15:53.760 "qid": 0, 00:15:53.760 "state": "enabled", 00:15:53.760 "listen_address": { 00:15:53.760 "trtype": "RDMA", 00:15:53.760 "adrfam": "IPv4", 00:15:53.760 "traddr": "192.168.100.8", 00:15:53.760 "trsvcid": "4420" 00:15:53.760 }, 00:15:53.760 "peer_address": { 00:15:53.760 "trtype": "RDMA", 00:15:53.760 "adrfam": "IPv4", 00:15:53.760 "traddr": "192.168.100.8", 00:15:53.760 "trsvcid": "35465" 00:15:53.760 }, 00:15:53.760 "auth": { 00:15:53.760 "state": "completed", 00:15:53.760 "digest": "sha256", 00:15:53.760 "dhgroup": "ffdhe8192" 00:15:53.760 } 00:15:53.760 } 00:15:53.760 ]' 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.760 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.018 12:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:15:54.586 12:59:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.845 12:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.412 00:15:55.412 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:55.412 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:55.412 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.670 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.670 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.670 12:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.671 
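Between rounds the state is torn down completely: the SPDK controller is detached, the kernel session is disconnected, and the host entry is removed from the subsystem, so the next key/dhgroup combination starts from an unauthenticated state. The teardown, condensed from the trace (same placeholder conventions as above):

    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
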
12:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:55.671 { 00:15:55.671 "cntlid": 47, 00:15:55.671 "qid": 0, 00:15:55.671 "state": "enabled", 00:15:55.671 "listen_address": { 00:15:55.671 "trtype": "RDMA", 00:15:55.671 "adrfam": "IPv4", 00:15:55.671 "traddr": "192.168.100.8", 00:15:55.671 "trsvcid": "4420" 00:15:55.671 }, 00:15:55.671 "peer_address": { 00:15:55.671 "trtype": "RDMA", 00:15:55.671 "adrfam": "IPv4", 00:15:55.671 "traddr": "192.168.100.8", 00:15:55.671 "trsvcid": "47661" 00:15:55.671 }, 00:15:55.671 "auth": { 00:15:55.671 "state": "completed", 00:15:55.671 "digest": "sha256", 00:15:55.671 "dhgroup": "ffdhe8192" 00:15:55.671 } 00:15:55.671 } 00:15:55.671 ]' 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.671 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.929 12:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:15:56.498 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:56.758 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:57.017 00:15:57.017 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:57.017 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:57.017 12:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:57.277 { 00:15:57.277 "cntlid": 49, 00:15:57.277 "qid": 0, 00:15:57.277 "state": "enabled", 00:15:57.277 "listen_address": { 00:15:57.277 "trtype": "RDMA", 00:15:57.277 "adrfam": "IPv4", 00:15:57.277 "traddr": "192.168.100.8", 00:15:57.277 "trsvcid": "4420" 00:15:57.277 }, 00:15:57.277 "peer_address": { 00:15:57.277 "trtype": "RDMA", 00:15:57.277 "adrfam": "IPv4", 00:15:57.277 "traddr": "192.168.100.8", 00:15:57.277 "trsvcid": "37022" 00:15:57.277 }, 00:15:57.277 "auth": { 00:15:57.277 "state": "completed", 00:15:57.277 "digest": "sha384", 00:15:57.277 "dhgroup": "null" 00:15:57.277 } 00:15:57.277 } 00:15:57.277 ]' 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
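At this point the outer loop advances from sha256 to sha384 and restarts the DH-group sweep at "null" (no Diffie-Hellman augmentation; plain challenge-response with the shared key). Reconstructed from the loop markers at target/auth.sh@84-@87 visible in this trace, the test's overall shape is roughly as follows (the exact array contents are not visible in this excerpt):

    for digest in "${digests[@]}"; do        # auth.sh@84: sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do    # auth.sh@85: null, ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do       # auth.sh@86: key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"                        # auth.sh@87
          connect_authenticate "$digest" "$dhgroup" "$keyid"      # auth.sh@89
        done
      done
    done
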
00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:57.277 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:57.537 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.537 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.537 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.537 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:15:58.105 12:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.364 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.623 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:58.623 
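Two RPC wrappers alternate throughout this trace: rpc_cmd (from autotest_common.sh, with xtrace toggled off around it) talks to the nvmf target over the default RPC socket, while hostrpc (target/auth.sh@31) aims the same rpc.py at the second SPDK application that plays the host. A sketch of the latter, using the literal socket path from this trace ($rootdir again stands for the SPDK checkout):

    # target/auth.sh@31, as expanded in this trace
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
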
12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:58.883 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:58.883 { 00:15:58.883 "cntlid": 51, 00:15:58.883 "qid": 0, 00:15:58.883 "state": "enabled", 00:15:58.883 "listen_address": { 00:15:58.883 "trtype": "RDMA", 00:15:58.883 "adrfam": "IPv4", 00:15:58.883 "traddr": "192.168.100.8", 00:15:58.883 "trsvcid": "4420" 00:15:58.883 }, 00:15:58.883 "peer_address": { 00:15:58.883 "trtype": "RDMA", 00:15:58.883 "adrfam": "IPv4", 00:15:58.883 "traddr": "192.168.100.8", 00:15:58.883 "trsvcid": "54318" 00:15:58.883 }, 00:15:58.883 "auth": { 00:15:58.883 "state": "completed", 00:15:58.883 "digest": "sha384", 00:15:58.883 "dhgroup": "null" 00:15:58.883 } 00:15:58.883 } 00:15:58.883 ]' 00:15:58.883 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.142 12:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.403 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.971 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:00.230 12:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:00.495 00:16:00.495 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 
00:16:00.496 { 00:16:00.496 "cntlid": 53, 00:16:00.496 "qid": 0, 00:16:00.496 "state": "enabled", 00:16:00.496 "listen_address": { 00:16:00.496 "trtype": "RDMA", 00:16:00.496 "adrfam": "IPv4", 00:16:00.496 "traddr": "192.168.100.8", 00:16:00.496 "trsvcid": "4420" 00:16:00.496 }, 00:16:00.496 "peer_address": { 00:16:00.496 "trtype": "RDMA", 00:16:00.496 "adrfam": "IPv4", 00:16:00.496 "traddr": "192.168.100.8", 00:16:00.496 "trsvcid": "35465" 00:16:00.496 }, 00:16:00.496 "auth": { 00:16:00.496 "state": "completed", 00:16:00.496 "digest": "sha384", 00:16:00.496 "dhgroup": "null" 00:16:00.496 } 00:16:00.496 } 00:16:00.496 ]' 00:16:00.496 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.754 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.012 12:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- 
00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:01.579 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:01.838 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:02.097 
00:16:02.097 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:02.097 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:02.097 12:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:02.355 {
00:16:02.355 "cntlid": 55,
00:16:02.355 "qid": 0,
00:16:02.355 "state": "enabled",
00:16:02.355 "listen_address": {
00:16:02.355 "trtype": "RDMA",
00:16:02.355 "adrfam": "IPv4",
00:16:02.355 "traddr": "192.168.100.8",
00:16:02.355 "trsvcid": "4420"
00:16:02.355 },
00:16:02.355 "peer_address": {
00:16:02.355 "trtype": "RDMA",
00:16:02.355 "adrfam": "IPv4",
00:16:02.355 "traddr": "192.168.100.8",
00:16:02.355 "trsvcid": "52868"
00:16:02.355 },
00:16:02.355 "auth": {
00:16:02.355 "state": "completed",
00:16:02.355 "digest": "sha384",
00:16:02.355 "dhgroup": "null"
00:16:02.355 }
00:16:02.355 }
00:16:02.355 ]'
12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]]
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:02.355 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:02.614 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=:
00:16:03.181 12:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:03.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
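With key3 done, the null group is exhausted and the outer `for dhgroup` loop traced below advances to the FFDHE groups. The loop shape implied by the `target/auth.sh@85`/`@86` trace lines is sketched here; the variable names follow the traced arrays, and only the DH groups that actually appear in this excerpt are listed (the script's real array may hold more):

    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to a single digest and DH group, then run one round:
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done

Because the host offers only the one group per pass, a qpair reaching auth state "completed" proves that exact group was negotiated.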
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:03.441 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:03.700 
00:16:03.700 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:03.700 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:03.700 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:03.959 {
00:16:03.959 "cntlid": 57,
00:16:03.959 "qid": 0,
00:16:03.959 "state": "enabled",
00:16:03.959 "listen_address": {
00:16:03.959 "trtype": "RDMA",
00:16:03.959 "adrfam": "IPv4",
00:16:03.959 "traddr": "192.168.100.8",
00:16:03.959 "trsvcid": "4420"
00:16:03.959 },
00:16:03.959 "peer_address": {
00:16:03.959 "trtype": "RDMA",
00:16:03.959 "adrfam": "IPv4",
00:16:03.959 "traddr": "192.168.100.8",
00:16:03.959 "trsvcid": "39683"
00:16:03.959 },
00:16:03.959 "auth": {
00:16:03.959 "state": "completed",
00:16:03.959 "digest": "sha384",
00:16:03.959 "dhgroup": "ffdhe2048"
00:16:03.959 }
00:16:03.959 }
00:16:03.959 ]'
12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:03.959 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:04.218 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:04.218 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:04.218 12:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:04.218 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:
00:16:04.785 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:05.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:05.042 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:05.042 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.042 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.043 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.043 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:05.043 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.043 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:05.301 12:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:05.560 
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.560 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:05.560 {
00:16:05.560 "cntlid": 59,
00:16:05.560 "qid": 0,
00:16:05.560 "state": "enabled",
00:16:05.560 "listen_address": {
00:16:05.560 "trtype": "RDMA",
00:16:05.560 "adrfam": "IPv4",
00:16:05.561 "traddr": "192.168.100.8",
00:16:05.561 "trsvcid": "4420"
00:16:05.561 },
00:16:05.561 "peer_address": {
00:16:05.561 "trtype": "RDMA",
00:16:05.561 "adrfam": "IPv4",
00:16:05.561 "traddr": "192.168.100.8",
00:16:05.561 "trsvcid": "43247"
00:16:05.561 },
00:16:05.561 "auth": {
00:16:05.561 "state": "completed",
00:16:05.561 "digest": "sha384",
00:16:05.561 "dhgroup": "ffdhe2048"
00:16:05.561 }
00:16:05.561 }
00:16:05.561 ]'
12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.819 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.077 12:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv:
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:06.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:06.645 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:06.905 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:07.163 
00:16:07.163 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:07.163 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:07.163 12:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:07.421 {
00:16:07.421 "cntlid": 61,
00:16:07.421 "qid": 0,
00:16:07.421 "state": "enabled",
00:16:07.421 "listen_address": {
00:16:07.421 "trtype": "RDMA",
00:16:07.421 "adrfam": "IPv4",
00:16:07.421 "traddr": "192.168.100.8",
00:16:07.421 "trsvcid": "4420"
00:16:07.421 },
00:16:07.421 "peer_address": {
00:16:07.421 "trtype": "RDMA",
00:16:07.421 "adrfam": "IPv4",
00:16:07.421 "traddr": "192.168.100.8",
00:16:07.421 "trsvcid": "36122"
00:16:07.421 },
00:16:07.421 "auth": {
00:16:07.421 "state": "completed",
00:16:07.421 "digest": "sha384",
00:16:07.421 "dhgroup": "ffdhe2048"
00:16:07.421 }
00:16:07.421 }
00:16:07.421 ]'
12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.421 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:07.679 12:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==:
00:16:08.245 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.245 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:08.245 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:08.245 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:08.504 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:08.763 
00:16:08.763 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:08.763 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:08.763 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:09.022 {
00:16:09.022 "cntlid": 63,
00:16:09.022 "qid": 0,
00:16:09.022 "state": "enabled",
00:16:09.022 "listen_address": {
00:16:09.022 "trtype": "RDMA",
00:16:09.022 "adrfam": "IPv4",
00:16:09.022 "traddr": "192.168.100.8",
00:16:09.022 "trsvcid": "4420"
00:16:09.022 },
00:16:09.022 "peer_address": {
00:16:09.022 "trtype": "RDMA",
00:16:09.022 "adrfam": "IPv4",
00:16:09.022 "traddr": "192.168.100.8",
00:16:09.022 "trsvcid": "50898"
00:16:09.022 },
00:16:09.022 "auth": {
00:16:09.022 "state": "completed",
00:16:09.022 "digest": "sha384",
00:16:09.022 "dhgroup": "ffdhe2048"
00:16:09.022 }
00:16:09.022 }
00:16:09.022 ]'
12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.022 12:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.281 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=:
00:16:09.907 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:10.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
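Every round closes with the same three assertions against the qpair's auth object, traced above as jq/[[ pairs at target/auth.sh@45-@47. They could equally be collapsed into a single jq test; a sketch equivalent to the three checks for the round just finished:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -e --arg d sha384 --arg g ffdhe2048 \
        '.[0].auth | .digest == $d and .dhgroup == $g and .state == "completed"' \
        <<< "$qpairs"

jq -e sets its exit status from the result, so under `set -e` this fails the run the same way the separate [[ ... ]] comparisons would.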
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:10.167 12:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:10.167 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:10.427 
00:16:10.427 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:10.427 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:10.427 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:10.686 {
00:16:10.686 "cntlid": 65,
00:16:10.686 "qid": 0,
00:16:10.686 "state": "enabled",
00:16:10.686 "listen_address": {
00:16:10.686 "trtype": "RDMA",
00:16:10.686 "adrfam": "IPv4",
00:16:10.686 "traddr": "192.168.100.8",
00:16:10.686 "trsvcid": "4420"
00:16:10.686 },
00:16:10.686 "peer_address": {
00:16:10.686 "trtype": "RDMA",
00:16:10.686 "adrfam": "IPv4",
00:16:10.686 "traddr": "192.168.100.8",
00:16:10.686 "trsvcid": "51101"
00:16:10.686 },
00:16:10.686 "auth": {
00:16:10.686 "state": "completed",
00:16:10.686 "digest": "sha384",
00:16:10.686 "dhgroup": "ffdhe3072"
00:16:10.686 }
00:16:10.686 }
00:16:10.686 ]'
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:10.686 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:10.947 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.947 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.947 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.947 12:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:
00:16:11.514 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:11.773 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:12.032 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
00:16:12.291 
00:16:12.291 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:12.291 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.291 12:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.291 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:12.291 {
00:16:12.291 "cntlid": 67,
00:16:12.291 "qid": 0,
00:16:12.291 "state": "enabled",
00:16:12.291 "listen_address": {
00:16:12.291 "trtype": "RDMA",
00:16:12.291 "adrfam": "IPv4",
00:16:12.291 "traddr": "192.168.100.8",
00:16:12.291 "trsvcid": "4420"
00:16:12.291 },
00:16:12.291 "peer_address": {
00:16:12.291 "trtype": "RDMA",
00:16:12.291 "adrfam": "IPv4",
00:16:12.291 "traddr": "192.168.100.8",
00:16:12.291 "trsvcid": "51808"
00:16:12.291 },
00:16:12.291 "auth": {
00:16:12.291 "state": "completed",
00:16:12.291 "digest": "sha384",
00:16:12.291 "dhgroup": "ffdhe3072"
00:16:12.291 }
00:16:12.291 }
00:16:12.291 ]'
12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.550 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.808 12:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv:
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:13.376 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:13.635 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:16:13.894 
00:16:13.894 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:13.894 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:13.894 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:14.152 {
00:16:14.152 "cntlid": 69,
00:16:14.152 "qid": 0,
00:16:14.152 "state": "enabled",
00:16:14.152 "listen_address": {
00:16:14.152 "trtype": "RDMA",
00:16:14.152 "adrfam": "IPv4",
00:16:14.152 "traddr": "192.168.100.8",
00:16:14.152 "trsvcid": "4420"
00:16:14.152 },
00:16:14.152 "peer_address": {
00:16:14.152 "trtype": "RDMA",
00:16:14.152 "adrfam": "IPv4",
00:16:14.152 "traddr": "192.168.100.8",
00:16:14.152 "trsvcid": "37352"
00:16:14.152 },
00:16:14.152 "auth": {
00:16:14.152 "state": "completed",
00:16:14.152 "digest": "sha384",
00:16:14.152 "dhgroup": "ffdhe3072"
00:16:14.152 }
00:16:14.152 }
00:16:14.152 ]'
12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:14.152 12:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:14.152 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.152 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.152 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.411 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==:
00:16:14.980 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:15.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
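Two RPC sockets are in play throughout this section: rpc_cmd drives the target application, while every `target/auth.sh@31` trace line shows hostrpc expanding to rpc.py against /var/tmp/host.sock, i.e. a second SPDK application acting as the NVMe host. A definition consistent with that traced expansion (a sketch; the real helper lives in target/auth.sh):

    hostrpc() {
        # Forward any RPC to the host-side SPDK app, as seen at target/auth.sh@31.
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

Keeping host and target in separate processes is what lets the test set mismatching DH-CHAP policies on each side independently.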
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:15.239 12:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:15.239 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:15.504 
00:16:15.504 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:15.504 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:15.504 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:15.760 {
00:16:15.760 "cntlid": 71,
00:16:15.760 "qid": 0,
00:16:15.760 "state": "enabled",
00:16:15.760 "listen_address": {
00:16:15.760 "trtype": "RDMA",
00:16:15.760 "adrfam": "IPv4",
00:16:15.760 "traddr": "192.168.100.8",
00:16:15.760 "trsvcid": "4420"
00:16:15.760 },
00:16:15.760 "peer_address": {
00:16:15.760 "trtype": "RDMA",
00:16:15.760 "adrfam": "IPv4",
00:16:15.760 "traddr": "192.168.100.8",
00:16:15.760 "trsvcid": "45344"
00:16:15.760 },
00:16:15.760 "auth": {
00:16:15.760 "state": "completed",
00:16:15.760 "digest": "sha384",
00:16:15.760 "dhgroup": "ffdhe3072"
00:16:15.760 }
00:16:15.760 }
00:16:15.760 ]'
12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:15.760 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.018 12:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=:
00:16:16.584 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:16.843 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:17.102 12:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:16:17.360 
00:16:17.360 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:16:17.360 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:16:17.360 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:16:17.619 {
00:16:17.619 "cntlid": 73,
00:16:17.619 "qid": 0,
00:16:17.619 "state": "enabled",
00:16:17.619 "listen_address": {
00:16:17.619 "trtype": "RDMA",
00:16:17.619 "adrfam": "IPv4",
00:16:17.619 "traddr": "192.168.100.8",
00:16:17.619 "trsvcid": "4420"
00:16:17.619 },
00:16:17.619 "peer_address": {
00:16:17.619 "trtype": "RDMA",
00:16:17.619 "adrfam": "IPv4",
00:16:17.619 "traddr": "192.168.100.8",
00:16:17.619 "trsvcid": "56130"
00:16:17.619 },
00:16:17.619 "auth": {
00:16:17.619 "state": "completed",
00:16:17.619 "digest": "sha384",
00:16:17.619 "dhgroup": "ffdhe4096"
00:16:17.619 }
00:16:17.619 }
00:16:17.619 ]'
12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:17.619 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:17.878 12:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:
00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
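Besides the SPDK-to-SPDK attach, each round also exercises the kernel initiator through the `nvme connect` / `nvme disconnect` pair just traced. Isolated, with the NQN, host ID and key0 secret exactly as they appear in this log, that leg is:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
        --hostid 809f3706-e051-e711-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # emits the "disconnected 1 controller(s)" line

Only after both legs succeed does the script remove the host from the subsystem and move on to the next key.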
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.446 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.704 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:18.705 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:18.964 00:16:18.964 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:18.964 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:18.964 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # qpairs='[ 00:16:19.222 { 00:16:19.222 "cntlid": 75, 00:16:19.222 "qid": 0, 00:16:19.222 "state": "enabled", 00:16:19.222 "listen_address": { 00:16:19.222 "trtype": "RDMA", 00:16:19.222 "adrfam": "IPv4", 00:16:19.222 "traddr": "192.168.100.8", 00:16:19.222 "trsvcid": "4420" 00:16:19.222 }, 00:16:19.222 "peer_address": { 00:16:19.222 "trtype": "RDMA", 00:16:19.222 "adrfam": "IPv4", 00:16:19.222 "traddr": "192.168.100.8", 00:16:19.222 "trsvcid": "44634" 00:16:19.222 }, 00:16:19.222 "auth": { 00:16:19.222 "state": "completed", 00:16:19.222 "digest": "sha384", 00:16:19.222 "dhgroup": "ffdhe4096" 00:16:19.222 } 00:16:19.222 } 00:16:19.222 ]' 00:16:19.222 12:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.222 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.480 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:20.048 12:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.307 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:20.567 
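Each connect_authenticate iteration in this trace exercises one (digest, dhgroup, key) combination end to end. Two SPDK applications are in play: the NVMe-oF target, driven through the default RPC socket (the rpc_cmd calls), and a second app acting as the host, driven through /var/tmp/host.sock (the hostrpc wrapper); the kernel initiator then repeats the handshake via nvme connect using the raw DHHC-1 secret. A condensed sketch of one iteration follows, with socket paths, NQNs, addresses and the key2 material copied from the trace, but the shell itself illustrative rather than a verbatim excerpt from auth.sh:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    secret='DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==:'

    # Host side: pin the allowed digest and DH group for this iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # Target side: allow the host NQN and bind it to the key under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2
    # Host side: attaching a controller triggers the DH-HMAC-CHAP handshake.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2
    # Target side: the new qpair must report a completed authentication.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # completed
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Kernel initiator repeats the handshake with the raw secret, then tears down.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"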
12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:20.567 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:20.826 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:20.826 { 00:16:20.826 "cntlid": 77, 00:16:20.826 "qid": 0, 00:16:20.826 "state": "enabled", 00:16:20.826 "listen_address": { 00:16:20.826 "trtype": "RDMA", 00:16:20.826 "adrfam": "IPv4", 00:16:20.826 "traddr": "192.168.100.8", 00:16:20.826 "trsvcid": "4420" 00:16:20.826 }, 00:16:20.826 "peer_address": { 00:16:20.826 "trtype": "RDMA", 00:16:20.826 "adrfam": "IPv4", 00:16:20.826 "traddr": "192.168.100.8", 00:16:20.826 "trsvcid": "41715" 00:16:20.826 }, 00:16:20.826 "auth": { 00:16:20.826 "state": "completed", 00:16:20.826 "digest": "sha384", 00:16:20.826 "dhgroup": "ffdhe4096" 00:16:20.826 } 00:16:20.826 } 00:16:20.826 ]' 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.826 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:21.084 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.084 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:21.084 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.084 12:59:58 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.085 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.344 12:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.912 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.172 12:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.431 00:16:22.431 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:16:22.431 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:22.431 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:22.690 { 00:16:22.690 "cntlid": 79, 00:16:22.690 "qid": 0, 00:16:22.690 "state": "enabled", 00:16:22.690 "listen_address": { 00:16:22.690 "trtype": "RDMA", 00:16:22.690 "adrfam": "IPv4", 00:16:22.690 "traddr": "192.168.100.8", 00:16:22.690 "trsvcid": "4420" 00:16:22.690 }, 00:16:22.690 "peer_address": { 00:16:22.690 "trtype": "RDMA", 00:16:22.690 "adrfam": "IPv4", 00:16:22.690 "traddr": "192.168.100.8", 00:16:22.690 "trsvcid": "46513" 00:16:22.690 }, 00:16:22.690 "auth": { 00:16:22.690 "state": "completed", 00:16:22.690 "digest": "sha384", 00:16:22.690 "dhgroup": "ffdhe4096" 00:16:22.690 } 00:16:22.690 } 00:16:22.690 ]' 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.690 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.949 13:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.517 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.774 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:24.339 00:16:24.339 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:24.339 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:24.339 13:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:24.339 { 00:16:24.339 "cntlid": 81, 00:16:24.339 "qid": 0, 00:16:24.339 "state": "enabled", 00:16:24.339 "listen_address": { 00:16:24.339 "trtype": "RDMA", 00:16:24.339 "adrfam": "IPv4", 00:16:24.339 "traddr": "192.168.100.8", 00:16:24.339 "trsvcid": "4420" 00:16:24.339 }, 
00:16:24.339 "peer_address": { 00:16:24.339 "trtype": "RDMA", 00:16:24.339 "adrfam": "IPv4", 00:16:24.339 "traddr": "192.168.100.8", 00:16:24.339 "trsvcid": "42717" 00:16:24.339 }, 00:16:24.339 "auth": { 00:16:24.339 "state": "completed", 00:16:24.339 "digest": "sha384", 00:16:24.339 "dhgroup": "ffdhe6144" 00:16:24.339 } 00:16:24.339 } 00:16:24.339 ]' 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.339 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.596 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.596 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.596 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.596 13:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:16:25.180 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.438 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 
00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.696 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.955 00:16:25.955 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:25.955 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.955 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:26.217 { 00:16:26.217 "cntlid": 83, 00:16:26.217 "qid": 0, 00:16:26.217 "state": "enabled", 00:16:26.217 "listen_address": { 00:16:26.217 "trtype": "RDMA", 00:16:26.217 "adrfam": "IPv4", 00:16:26.217 "traddr": "192.168.100.8", 00:16:26.217 "trsvcid": "4420" 00:16:26.217 }, 00:16:26.217 "peer_address": { 00:16:26.217 "trtype": "RDMA", 00:16:26.217 "adrfam": "IPv4", 00:16:26.217 "traddr": "192.168.100.8", 00:16:26.217 "trsvcid": "50111" 00:16:26.217 }, 00:16:26.217 "auth": { 00:16:26.217 "state": "completed", 00:16:26.217 "digest": "sha384", 00:16:26.217 "dhgroup": "ffdhe6144" 00:16:26.217 } 00:16:26.217 } 00:16:26.217 ]' 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:26.217 13:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.217 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:26.217 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.217 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.217 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.475 13:00:04 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:27.042 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.301 13:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.301 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:27.872 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
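The --dhchap-secret strings follow the DH-HMAC-CHAP key representation defined by the NVMe specification: DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the key material (00 = unhashed, 01/02/03 = SHA-256/384/512, the same numbering nvme-cli's gen-dhchap-key uses for its --hmac option) and the base64 payload is the key followed by a 4-byte CRC-32. That is why the four test keys cycle through 00..03 and grow accordingly. A quick sanity check against key0 from this trace; the byte accounting below is my reading of the format, not output captured from this run:

    secret='DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==:'
    # 72 base64 characters decode to 52 bytes: a 48-byte key plus the 4-byte CRC-32.
    printf '%s' "$secret" | cut -d: -f3 | base64 -d | wc -c    # expect 52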
00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.872 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:27.872 { 00:16:27.872 "cntlid": 85, 00:16:27.872 "qid": 0, 00:16:27.872 "state": "enabled", 00:16:27.872 "listen_address": { 00:16:27.872 "trtype": "RDMA", 00:16:27.872 "adrfam": "IPv4", 00:16:27.872 "traddr": "192.168.100.8", 00:16:27.872 "trsvcid": "4420" 00:16:27.872 }, 00:16:27.872 "peer_address": { 00:16:27.872 "trtype": "RDMA", 00:16:27.872 "adrfam": "IPv4", 00:16:27.872 "traddr": "192.168.100.8", 00:16:27.872 "trsvcid": "47774" 00:16:27.872 }, 00:16:27.872 "auth": { 00:16:27.872 "state": "completed", 00:16:27.873 "digest": "sha384", 00:16:27.873 "dhgroup": "ffdhe6144" 00:16:27.873 } 00:16:27.873 } 00:16:27.873 ]' 00:16:27.873 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:27.873 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.873 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:28.131 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.131 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:28.131 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.131 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.131 13:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.131 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.068 13:00:06 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.068 13:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.636 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:29.636 { 00:16:29.636 "cntlid": 87, 00:16:29.636 "qid": 0, 00:16:29.636 "state": "enabled", 00:16:29.636 "listen_address": { 00:16:29.636 "trtype": "RDMA", 00:16:29.636 "adrfam": "IPv4", 00:16:29.636 "traddr": "192.168.100.8", 00:16:29.636 "trsvcid": "4420" 00:16:29.636 }, 00:16:29.636 "peer_address": { 00:16:29.636 "trtype": "RDMA", 00:16:29.636 "adrfam": "IPv4", 00:16:29.636 "traddr": "192.168.100.8", 00:16:29.636 "trsvcid": "45681" 00:16:29.636 }, 00:16:29.636 "auth": { 00:16:29.636 "state": "completed", 00:16:29.636 "digest": "sha384", 00:16:29.636 "dhgroup": "ffdhe6144" 00:16:29.636 } 00:16:29.636 } 00:16:29.636 ]' 00:16:29.636 13:00:07 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.894 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.152 13:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.720 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:30.978 13:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:31.547 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:31.547 { 00:16:31.547 "cntlid": 89, 00:16:31.547 "qid": 0, 00:16:31.547 "state": "enabled", 00:16:31.547 "listen_address": { 00:16:31.547 "trtype": "RDMA", 00:16:31.547 "adrfam": "IPv4", 00:16:31.547 "traddr": "192.168.100.8", 00:16:31.547 "trsvcid": "4420" 00:16:31.547 }, 00:16:31.547 "peer_address": { 00:16:31.547 "trtype": "RDMA", 00:16:31.547 "adrfam": "IPv4", 00:16:31.547 "traddr": "192.168.100.8", 00:16:31.547 "trsvcid": "53323" 00:16:31.547 }, 00:16:31.547 "auth": { 00:16:31.547 "state": "completed", 00:16:31.547 "digest": "sha384", 00:16:31.547 "dhgroup": "ffdhe8192" 00:16:31.547 } 00:16:31.547 } 00:16:31.547 ]' 00:16:31.547 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.806 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.065 13:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.632 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:32.891 13:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:33.460 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
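Because bdev_nvme_set_options pins the host to a single digest and a single FFDHE group per iteration, the negotiation (the host offers its supported hashes and DH groups, the controller selects) can only land on exactly that pair, so the per-qpair auth object doubles as a check that the handshake negotiated what the test intended. The three jq probes repeated after every attach amount to the following sketch, with the filters copied from the trace and the surrounding shell assumed:

    subnqn=nqn.2024-03.io.spdk:cnode0
    qpairs=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                 nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished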
00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.460 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:33.460 { 00:16:33.460 "cntlid": 91, 00:16:33.460 "qid": 0, 00:16:33.460 "state": "enabled", 00:16:33.460 "listen_address": { 00:16:33.460 "trtype": "RDMA", 00:16:33.460 "adrfam": "IPv4", 00:16:33.460 "traddr": "192.168.100.8", 00:16:33.460 "trsvcid": "4420" 00:16:33.460 }, 00:16:33.460 "peer_address": { 00:16:33.460 "trtype": "RDMA", 00:16:33.460 "adrfam": "IPv4", 00:16:33.460 "traddr": "192.168.100.8", 00:16:33.461 "trsvcid": "50189" 00:16:33.461 }, 00:16:33.461 "auth": { 00:16:33.461 "state": "completed", 00:16:33.461 "digest": "sha384", 00:16:33.461 "dhgroup": "ffdhe8192" 00:16:33.461 } 00:16:33.461 } 00:16:33.461 ]' 00:16:33.461 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.722 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.981 13:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.549 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- 
# connect_authenticate sha384 ffdhe8192 2 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:34.864 13:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:35.163 00:16:35.163 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:35.163 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:35.163 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:35.426 { 00:16:35.426 "cntlid": 93, 00:16:35.426 "qid": 0, 00:16:35.426 "state": "enabled", 00:16:35.426 "listen_address": { 00:16:35.426 "trtype": "RDMA", 00:16:35.426 "adrfam": "IPv4", 00:16:35.426 "traddr": "192.168.100.8", 00:16:35.426 "trsvcid": "4420" 00:16:35.426 }, 00:16:35.426 "peer_address": { 00:16:35.426 "trtype": "RDMA", 00:16:35.426 "adrfam": "IPv4", 00:16:35.426 "traddr": "192.168.100.8", 00:16:35.426 "trsvcid": "40184" 00:16:35.426 }, 00:16:35.426 "auth": { 00:16:35.426 "state": "completed", 00:16:35.426 "digest": "sha384", 00:16:35.426 "dhgroup": "ffdhe8192" 00:16:35.426 } 00:16:35.426 } 00:16:35.426 ]' 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.426 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:35.685 13:00:13 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.685 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:35.685 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.685 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.685 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.685 13:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.621 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.189 00:16:37.189 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:37.189 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:37.189 13:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:37.448 { 00:16:37.448 "cntlid": 95, 00:16:37.448 "qid": 0, 00:16:37.448 "state": "enabled", 00:16:37.448 "listen_address": { 00:16:37.448 "trtype": "RDMA", 00:16:37.448 "adrfam": "IPv4", 00:16:37.448 "traddr": "192.168.100.8", 00:16:37.448 "trsvcid": "4420" 00:16:37.448 }, 00:16:37.448 "peer_address": { 00:16:37.448 "trtype": "RDMA", 00:16:37.448 "adrfam": "IPv4", 00:16:37.448 "traddr": "192.168.100.8", 00:16:37.448 "trsvcid": "44068" 00:16:37.448 }, 00:16:37.448 "auth": { 00:16:37.448 "state": "completed", 00:16:37.448 "digest": "sha384", 00:16:37.448 "dhgroup": "ffdhe8192" 00:16:37.448 } 00:16:37.448 } 00:16:37.448 ]' 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.448 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.707 13:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:38.273 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:38.532 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:38.790 00:16:38.790 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:38.790 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:38.790 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:39.048 { 00:16:39.048 "cntlid": 97, 00:16:39.048 "qid": 0, 00:16:39.048 "state": "enabled", 00:16:39.048 "listen_address": { 00:16:39.048 "trtype": "RDMA", 00:16:39.048 "adrfam": "IPv4", 00:16:39.048 "traddr": "192.168.100.8", 00:16:39.048 "trsvcid": "4420" 00:16:39.048 }, 00:16:39.048 "peer_address": { 00:16:39.048 "trtype": "RDMA", 00:16:39.048 "adrfam": "IPv4", 00:16:39.048 "traddr": "192.168.100.8", 00:16:39.048 "trsvcid": "55150" 00:16:39.048 }, 00:16:39.048 "auth": { 00:16:39.048 "state": "completed", 00:16:39.048 "digest": "sha512", 00:16:39.048 "dhgroup": "null" 00:16:39.048 } 00:16:39.048 } 00:16:39.048 ]' 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:39.048 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:39.306 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.306 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.306 13:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.307 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:16:39.884 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.142 13:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:40.400 
13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:40.400 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:40.400 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:40.676 { 00:16:40.676 "cntlid": 99, 00:16:40.676 "qid": 0, 00:16:40.676 "state": "enabled", 00:16:40.676 "listen_address": { 00:16:40.676 "trtype": "RDMA", 00:16:40.676 "adrfam": "IPv4", 00:16:40.676 "traddr": "192.168.100.8", 00:16:40.676 "trsvcid": "4420" 00:16:40.676 }, 00:16:40.676 "peer_address": { 00:16:40.676 "trtype": "RDMA", 00:16:40.676 "adrfam": "IPv4", 00:16:40.676 "traddr": "192.168.100.8", 00:16:40.676 "trsvcid": "38345" 00:16:40.676 }, 00:16:40.676 "auth": { 00:16:40.676 "state": "completed", 00:16:40.676 "digest": "sha512", 00:16:40.676 "dhgroup": "null" 00:16:40.676 } 00:16:40.676 } 00:16:40.676 ]' 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.676 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 
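The jq checks threaded through this stretch of the trace are the pass/fail core of connect_authenticate: after the host controller attaches, the target's qpair list must report the negotiated digest, the negotiated DH group, and a completed auth state. A minimal standalone version of that check, assuming the target app answers rpc.py on its default RPC socket (the harness's rpc_cmd wrapper is not expanded in this trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Fetch the live qpairs for the subsystem under test from the target side.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # The first qpair's auth block must match what bdev_nvme_set_options configured
  # for this iteration (sha512 digest, null dhgroup at this point in the run).
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
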
00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.935 13:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:41.870 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:41.871 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:16:42.128 00:16:42.128 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:42.128 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:42.128 13:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:42.386 { 00:16:42.386 "cntlid": 101, 00:16:42.386 "qid": 0, 00:16:42.386 "state": "enabled", 00:16:42.386 "listen_address": { 00:16:42.386 "trtype": "RDMA", 00:16:42.386 "adrfam": "IPv4", 00:16:42.386 "traddr": "192.168.100.8", 00:16:42.386 "trsvcid": "4420" 00:16:42.386 }, 00:16:42.386 "peer_address": { 00:16:42.386 "trtype": "RDMA", 00:16:42.386 "adrfam": "IPv4", 00:16:42.386 "traddr": "192.168.100.8", 00:16:42.386 "trsvcid": "49808" 00:16:42.386 }, 00:16:42.386 "auth": { 00:16:42.386 "state": "completed", 00:16:42.386 "digest": "sha512", 00:16:42.386 "dhgroup": "null" 00:16:42.386 } 00:16:42.386 } 00:16:42.386 ]' 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:42.386 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:42.645 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.645 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.645 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.645 13:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:43.214 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
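Between the teardown just traced and the next key's setup, the shape of one full iteration is worth spelling out. The following is a reconstruction from the traced commands for the sha512/null/key3 pass that starts below; the helper definitions are stand-ins (hostrpc's expansion is visible at auth.sh@31, while the real rpc_cmd in autotest_common.sh also toggles xtrace around the call):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host (initiator) app
  rpc_cmd() { "$rpc" "$@"; }                         # nvmf target app
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Pin the host to one digest/dhgroup pair, then authorize it on the target
  # with the DH-HMAC-CHAP key under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

  # Userspace path: attach a bdev controller, verify the qpair's auth block
  # (the jq checks shown earlier sit here), then detach.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
          -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
  hostrpc bdev_nvme_detach_controller nvme0

  # Kernel path: the same credentials through nvme-cli, then full teardown.
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
       --hostid 809f3706-e051-e711-906e-0017a4403562 \
       --dhchap-secret 'DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=:'
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
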
00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.473 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:43.731 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.732 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.732 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.732 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.990 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:43.990 { 00:16:43.990 "cntlid": 103, 00:16:43.990 "qid": 0, 00:16:43.990 "state": "enabled", 00:16:43.990 "listen_address": { 00:16:43.990 "trtype": "RDMA", 00:16:43.990 "adrfam": "IPv4", 00:16:43.990 "traddr": "192.168.100.8", 00:16:43.990 "trsvcid": "4420" 00:16:43.990 }, 00:16:43.990 "peer_address": { 00:16:43.990 
"trtype": "RDMA", 00:16:43.990 "adrfam": "IPv4", 00:16:43.990 "traddr": "192.168.100.8", 00:16:43.990 "trsvcid": "35504" 00:16:43.990 }, 00:16:43.990 "auth": { 00:16:43.990 "state": "completed", 00:16:43.990 "digest": "sha512", 00:16:43.990 "dhgroup": "null" 00:16:43.990 } 00:16:43.990 } 00:16:43.990 ]' 00:16:43.990 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.249 13:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.508 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.074 13:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:45.332 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:45.591 00:16:45.591 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:45.591 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:45.591 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:45.850 { 00:16:45.850 "cntlid": 105, 00:16:45.850 "qid": 0, 00:16:45.850 "state": "enabled", 00:16:45.850 "listen_address": { 00:16:45.850 "trtype": "RDMA", 00:16:45.850 "adrfam": "IPv4", 00:16:45.850 "traddr": "192.168.100.8", 00:16:45.850 "trsvcid": "4420" 00:16:45.850 }, 00:16:45.850 "peer_address": { 00:16:45.850 "trtype": "RDMA", 00:16:45.850 "adrfam": "IPv4", 00:16:45.850 "traddr": "192.168.100.8", 00:16:45.850 "trsvcid": "35222" 00:16:45.850 }, 00:16:45.850 "auth": { 00:16:45.850 "state": "completed", 00:16:45.850 "digest": "sha512", 00:16:45.850 "dhgroup": "ffdhe2048" 00:16:45.850 } 00:16:45.850 } 00:16:45.850 ]' 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.850 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:45.851 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.851 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:45.851 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.851 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.851 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.109 13:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.675 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:46.934 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:47.200 00:16:47.200 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:47.200 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:47.200 13:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:47.463 { 00:16:47.463 "cntlid": 107, 00:16:47.463 "qid": 0, 00:16:47.463 "state": "enabled", 00:16:47.463 "listen_address": { 00:16:47.463 "trtype": "RDMA", 00:16:47.463 "adrfam": "IPv4", 00:16:47.463 "traddr": "192.168.100.8", 00:16:47.463 "trsvcid": "4420" 00:16:47.463 }, 00:16:47.463 "peer_address": { 00:16:47.463 "trtype": "RDMA", 00:16:47.463 "adrfam": "IPv4", 00:16:47.463 "traddr": "192.168.100.8", 00:16:47.463 "trsvcid": "50865" 00:16:47.463 }, 00:16:47.463 "auth": { 00:16:47.463 "state": "completed", 00:16:47.463 "digest": "sha512", 00:16:47.463 "dhgroup": "ffdhe2048" 00:16:47.463 } 00:16:47.463 } 00:16:47.463 ]' 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.463 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.721 13:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:48.289 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.547 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:48.548 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.548 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.548 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.548 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.548 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.816 00:16:48.816 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:48.816 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:48.816 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.075 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.075 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.075 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.075 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.075 13:00:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:49.076 { 00:16:49.076 "cntlid": 109, 00:16:49.076 "qid": 0, 00:16:49.076 "state": "enabled", 00:16:49.076 "listen_address": { 00:16:49.076 "trtype": "RDMA", 00:16:49.076 "adrfam": "IPv4", 00:16:49.076 "traddr": "192.168.100.8", 00:16:49.076 "trsvcid": "4420" 00:16:49.076 }, 00:16:49.076 "peer_address": { 00:16:49.076 "trtype": "RDMA", 00:16:49.076 "adrfam": "IPv4", 00:16:49.076 "traddr": "192.168.100.8", 00:16:49.076 "trsvcid": "39759" 00:16:49.076 }, 00:16:49.076 "auth": { 00:16:49.076 "state": "completed", 00:16:49.076 "digest": "sha512", 00:16:49.076 "dhgroup": "ffdhe2048" 00:16:49.076 } 
00:16:49.076 } 00:16:49.076 ]' 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.076 13:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.334 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:49.900 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:16:50.158 13:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.158 13:00:28 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.158 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.422 00:16:50.422 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:50.422 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.422 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:50.682 { 00:16:50.682 "cntlid": 111, 00:16:50.682 "qid": 0, 00:16:50.682 "state": "enabled", 00:16:50.682 "listen_address": { 00:16:50.682 "trtype": "RDMA", 00:16:50.682 "adrfam": "IPv4", 00:16:50.682 "traddr": "192.168.100.8", 00:16:50.682 "trsvcid": "4420" 00:16:50.682 }, 00:16:50.682 "peer_address": { 00:16:50.682 "trtype": "RDMA", 00:16:50.682 "adrfam": "IPv4", 00:16:50.682 "traddr": "192.168.100.8", 00:16:50.682 "trsvcid": "40500" 00:16:50.682 }, 00:16:50.682 "auth": { 00:16:50.682 "state": "completed", 00:16:50.682 "digest": "sha512", 00:16:50.682 "dhgroup": "ffdhe2048" 00:16:50.682 } 00:16:50.682 } 00:16:50.682 ]' 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.682 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:50.940 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.940 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.940 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.940 13:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:51.507 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.765 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.024 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.282 00:16:52.282 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.282 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.282 13:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.282 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.282 
13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.282 13:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.282 13:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.282 13:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.540 { 00:16:52.540 "cntlid": 113, 00:16:52.540 "qid": 0, 00:16:52.540 "state": "enabled", 00:16:52.540 "listen_address": { 00:16:52.540 "trtype": "RDMA", 00:16:52.540 "adrfam": "IPv4", 00:16:52.540 "traddr": "192.168.100.8", 00:16:52.540 "trsvcid": "4420" 00:16:52.540 }, 00:16:52.540 "peer_address": { 00:16:52.540 "trtype": "RDMA", 00:16:52.540 "adrfam": "IPv4", 00:16:52.540 "traddr": "192.168.100.8", 00:16:52.540 "trsvcid": "33127" 00:16:52.540 }, 00:16:52.540 "auth": { 00:16:52.540 "state": "completed", 00:16:52.540 "digest": "sha512", 00:16:52.540 "dhgroup": "ffdhe3072" 00:16:52.540 } 00:16:52.540 } 00:16:52.540 ]' 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.540 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.799 13:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.366 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.625 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.884 00:16:53.884 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:53.884 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:53.884 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:54.143 { 00:16:54.143 "cntlid": 115, 00:16:54.143 "qid": 0, 00:16:54.143 "state": "enabled", 00:16:54.143 "listen_address": { 00:16:54.143 "trtype": "RDMA", 00:16:54.143 "adrfam": "IPv4", 00:16:54.143 "traddr": "192.168.100.8", 00:16:54.143 "trsvcid": "4420" 00:16:54.143 }, 00:16:54.143 "peer_address": { 00:16:54.143 "trtype": "RDMA", 00:16:54.143 "adrfam": "IPv4", 00:16:54.143 "traddr": "192.168.100.8", 00:16:54.143 "trsvcid": "42496" 00:16:54.143 }, 00:16:54.143 "auth": { 00:16:54.143 "state": "completed", 00:16:54.143 "digest": "sha512", 00:16:54.143 "dhgroup": "ffdhe3072" 00:16:54.143 } 00:16:54.143 } 00:16:54.143 ]' 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.143 13:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.402 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:16:54.972 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.973 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:54.973 13:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.973 13:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.234 13:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.234 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:55.234 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.234 13:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:16:55.234 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.492 00:16:55.492 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.492 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.492 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:55.750 { 00:16:55.750 "cntlid": 117, 00:16:55.750 "qid": 0, 00:16:55.750 "state": "enabled", 00:16:55.750 "listen_address": { 00:16:55.750 "trtype": "RDMA", 00:16:55.750 "adrfam": "IPv4", 00:16:55.750 "traddr": "192.168.100.8", 00:16:55.750 "trsvcid": "4420" 00:16:55.750 }, 00:16:55.750 "peer_address": { 00:16:55.750 "trtype": "RDMA", 00:16:55.750 "adrfam": "IPv4", 00:16:55.750 "traddr": "192.168.100.8", 00:16:55.750 "trsvcid": "52074" 00:16:55.750 }, 00:16:55.750 "auth": { 00:16:55.750 "state": "completed", 00:16:55.750 "digest": "sha512", 00:16:55.750 "dhgroup": "ffdhe3072" 00:16:55.750 } 00:16:55.750 } 00:16:55.750 ]' 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.750 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.008 13:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:16:56.578 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.837 13:00:34 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.837 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.096 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.096 00:16:57.354 13:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.355 
13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:57.355 { 00:16:57.355 "cntlid": 119, 00:16:57.355 "qid": 0, 00:16:57.355 "state": "enabled", 00:16:57.355 "listen_address": { 00:16:57.355 "trtype": "RDMA", 00:16:57.355 "adrfam": "IPv4", 00:16:57.355 "traddr": "192.168.100.8", 00:16:57.355 "trsvcid": "4420" 00:16:57.355 }, 00:16:57.355 "peer_address": { 00:16:57.355 "trtype": "RDMA", 00:16:57.355 "adrfam": "IPv4", 00:16:57.355 "traddr": "192.168.100.8", 00:16:57.355 "trsvcid": "50251" 00:16:57.355 }, 00:16:57.355 "auth": { 00:16:57.355 "state": "completed", 00:16:57.355 "digest": "sha512", 00:16:57.355 "dhgroup": "ffdhe3072" 00:16:57.355 } 00:16:57.355 } 00:16:57.355 ]' 00:16:57.355 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.613 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.872 13:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.465 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 
00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:58.736 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:58.994 00:16:58.994 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.994 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.994 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:59.251 { 00:16:59.251 "cntlid": 121, 00:16:59.251 "qid": 0, 00:16:59.251 "state": "enabled", 00:16:59.251 "listen_address": { 00:16:59.251 "trtype": "RDMA", 00:16:59.251 "adrfam": "IPv4", 00:16:59.251 "traddr": "192.168.100.8", 00:16:59.251 "trsvcid": "4420" 00:16:59.251 }, 00:16:59.251 "peer_address": { 00:16:59.251 "trtype": "RDMA", 00:16:59.251 "adrfam": "IPv4", 00:16:59.251 "traddr": "192.168.100.8", 00:16:59.251 "trsvcid": "41464" 00:16:59.251 }, 00:16:59.251 "auth": { 00:16:59.251 "state": "completed", 00:16:59.251 "digest": "sha512", 00:16:59.251 "dhgroup": "ffdhe4096" 00:16:59.251 } 00:16:59.251 } 00:16:59.251 ]' 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.251 13:00:36 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:59.251 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.251 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.251 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.510 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.078 13:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.336 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.594 00:17:00.594 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.594 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.594 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:00.853 { 00:17:00.853 "cntlid": 123, 00:17:00.853 "qid": 0, 00:17:00.853 "state": "enabled", 00:17:00.853 "listen_address": { 00:17:00.853 "trtype": "RDMA", 00:17:00.853 "adrfam": "IPv4", 00:17:00.853 "traddr": "192.168.100.8", 00:17:00.853 "trsvcid": "4420" 00:17:00.853 }, 00:17:00.853 "peer_address": { 00:17:00.853 "trtype": "RDMA", 00:17:00.853 "adrfam": "IPv4", 00:17:00.853 "traddr": "192.168.100.8", 00:17:00.853 "trsvcid": "41441" 00:17:00.853 }, 00:17:00.853 "auth": { 00:17:00.853 "state": "completed", 00:17:00.853 "digest": "sha512", 00:17:00.853 "dhgroup": "ffdhe4096" 00:17:00.853 } 00:17:00.853 } 00:17:00.853 ]' 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.853 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:01.111 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.111 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.111 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.111 13:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:17:01.679 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
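Each block of trace above and below is one connect_authenticate round: the host bdev layer is pinned to a single digest/dhgroup, the target authorizes the host NQN with one key, the host attaches (driving the DH-HMAC-CHAP handshake over RDMA), the qpair's auth block is checked, and everything is torn down for the next (dhgroup, keyid) pair. A minimal sketch of that round, condensed from the commands visible in this log; the sockets, NQNs and addresses are verbatim, while the key objects (key0..key3) are registered earlier in auth.sh and only assumed here:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

  # Host side: only negotiate this one digest/dhgroup combination.
  $rpc_py -s $host_sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: authorize the host NQN with the key under test.
  $rpc_py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

  # Attach; this performs the DH-HMAC-CHAP handshake on the new qpair.
  $rpc_py -s $host_sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1

  # Verify the authentication completed, then tear down for the next pair.
  $rpc_py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
  $rpc_py -s $host_sock bdev_nvme_detach_controller nvme0
  $rpc_py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Pinning the host to exactly one digest and one dhgroup per round is what makes the later jq assertions meaningful: whatever the qpair reports must be the combination that was forced, not a fallback.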
00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.938 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.197 13:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.455 00:17:02.455 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.455 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.455 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:02.713 { 00:17:02.713 "cntlid": 125, 00:17:02.713 "qid": 0, 00:17:02.713 "state": "enabled", 00:17:02.713 "listen_address": { 00:17:02.713 "trtype": "RDMA", 00:17:02.713 "adrfam": "IPv4", 00:17:02.713 
"traddr": "192.168.100.8", 00:17:02.713 "trsvcid": "4420" 00:17:02.713 }, 00:17:02.713 "peer_address": { 00:17:02.713 "trtype": "RDMA", 00:17:02.713 "adrfam": "IPv4", 00:17:02.713 "traddr": "192.168.100.8", 00:17:02.713 "trsvcid": "40023" 00:17:02.713 }, 00:17:02.713 "auth": { 00:17:02.713 "state": "completed", 00:17:02.713 "digest": "sha512", 00:17:02.713 "dhgroup": "ffdhe4096" 00:17:02.713 } 00:17:02.713 } 00:17:02.713 ]' 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.713 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.973 13:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:17:03.541 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.541 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:03.541 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.541 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.799 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.056 00:17:04.056 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.056 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.056 13:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:04.315 { 00:17:04.315 "cntlid": 127, 00:17:04.315 "qid": 0, 00:17:04.315 "state": "enabled", 00:17:04.315 "listen_address": { 00:17:04.315 "trtype": "RDMA", 00:17:04.315 "adrfam": "IPv4", 00:17:04.315 "traddr": "192.168.100.8", 00:17:04.315 "trsvcid": "4420" 00:17:04.315 }, 00:17:04.315 "peer_address": { 00:17:04.315 "trtype": "RDMA", 00:17:04.315 "adrfam": "IPv4", 00:17:04.315 "traddr": "192.168.100.8", 00:17:04.315 "trsvcid": "38955" 00:17:04.315 }, 00:17:04.315 "auth": { 00:17:04.315 "state": "completed", 00:17:04.315 "digest": "sha512", 00:17:04.315 "dhgroup": "ffdhe4096" 00:17:04.315 } 00:17:04.315 } 00:17:04.315 ]' 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.315 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:04.573 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.573 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.573 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.573 13:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:17:05.142 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.400 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:05.659 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:05.918 00:17:05.918 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:05.918 13:00:43 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # jq -r '.[].name' 00:17:05.918 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:06.177 { 00:17:06.177 "cntlid": 129, 00:17:06.177 "qid": 0, 00:17:06.177 "state": "enabled", 00:17:06.177 "listen_address": { 00:17:06.177 "trtype": "RDMA", 00:17:06.177 "adrfam": "IPv4", 00:17:06.177 "traddr": "192.168.100.8", 00:17:06.177 "trsvcid": "4420" 00:17:06.177 }, 00:17:06.177 "peer_address": { 00:17:06.177 "trtype": "RDMA", 00:17:06.177 "adrfam": "IPv4", 00:17:06.177 "traddr": "192.168.100.8", 00:17:06.177 "trsvcid": "34110" 00:17:06.177 }, 00:17:06.177 "auth": { 00:17:06.177 "state": "completed", 00:17:06.177 "digest": "sha512", 00:17:06.177 "dhgroup": "ffdhe6144" 00:17:06.177 } 00:17:06.177 } 00:17:06.177 ]' 00:17:06.177 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.178 13:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.437 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for 
keyid in "${!keys[@]}" 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.004 13:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:07.263 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:07.522 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:07.782 { 00:17:07.782 "cntlid": 131, 00:17:07.782 "qid": 0, 00:17:07.782 "state": "enabled", 00:17:07.782 "listen_address": { 00:17:07.782 "trtype": "RDMA", 00:17:07.782 "adrfam": "IPv4", 00:17:07.782 "traddr": "192.168.100.8", 00:17:07.782 "trsvcid": "4420" 00:17:07.782 }, 00:17:07.782 "peer_address": { 00:17:07.782 "trtype": "RDMA", 00:17:07.782 "adrfam": "IPv4", 00:17:07.782 "traddr": "192.168.100.8", 00:17:07.782 "trsvcid": "57157" 00:17:07.782 }, 00:17:07.782 "auth": { 
00:17:07.782 "state": "completed", 00:17:07.782 "digest": "sha512", 00:17:07.782 "dhgroup": "ffdhe6144" 00:17:07.782 } 00:17:07.782 } 00:17:07.782 ]' 00:17:07.782 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.042 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.301 13:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.868 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.126 13:00:46 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:09.126 13:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:09.384 00:17:09.384 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:09.384 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:09.384 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:09.641 { 00:17:09.641 "cntlid": 133, 00:17:09.641 "qid": 0, 00:17:09.641 "state": "enabled", 00:17:09.641 "listen_address": { 00:17:09.641 "trtype": "RDMA", 00:17:09.641 "adrfam": "IPv4", 00:17:09.641 "traddr": "192.168.100.8", 00:17:09.641 "trsvcid": "4420" 00:17:09.641 }, 00:17:09.641 "peer_address": { 00:17:09.641 "trtype": "RDMA", 00:17:09.641 "adrfam": "IPv4", 00:17:09.641 "traddr": "192.168.100.8", 00:17:09.641 "trsvcid": "40964" 00:17:09.641 }, 00:17:09.641 "auth": { 00:17:09.641 "state": "completed", 00:17:09.641 "digest": "sha512", 00:17:09.641 "dhgroup": "ffdhe6144" 00:17:09.641 } 00:17:09.641 } 00:17:09.641 ]' 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.641 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.899 13:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 
809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:17:10.465 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.723 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:10.982 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.982 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 13:00:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.982 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.982 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.241 00:17:11.241 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:11.241 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:11.241 13:00:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:11.499 { 00:17:11.499 "cntlid": 135, 00:17:11.499 "qid": 0, 00:17:11.499 "state": "enabled", 00:17:11.499 "listen_address": { 00:17:11.499 "trtype": "RDMA", 00:17:11.499 "adrfam": "IPv4", 00:17:11.499 "traddr": "192.168.100.8", 00:17:11.499 "trsvcid": "4420" 00:17:11.499 }, 00:17:11.499 "peer_address": { 00:17:11.499 "trtype": "RDMA", 00:17:11.499 "adrfam": "IPv4", 00:17:11.499 "traddr": "192.168.100.8", 00:17:11.499 "trsvcid": "50948" 00:17:11.499 }, 00:17:11.499 "auth": { 00:17:11.499 "state": "completed", 00:17:11.499 "digest": "sha512", 00:17:11.499 "dhgroup": "ffdhe6144" 00:17:11.499 } 00:17:11.499 } 00:17:11.499 ]' 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.499 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.757 13:00:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.324 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:12.584 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:13.152 00:17:13.152 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:13.152 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.152 13:00:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:13.411 { 00:17:13.411 "cntlid": 137, 00:17:13.411 "qid": 0, 00:17:13.411 "state": "enabled", 00:17:13.411 "listen_address": { 00:17:13.411 "trtype": "RDMA", 00:17:13.411 "adrfam": "IPv4", 00:17:13.411 "traddr": "192.168.100.8", 00:17:13.411 "trsvcid": "4420" 00:17:13.411 }, 00:17:13.411 "peer_address": { 00:17:13.411 "trtype": "RDMA", 00:17:13.411 "adrfam": "IPv4", 00:17:13.411 "traddr": "192.168.100.8", 00:17:13.411 "trsvcid": "48847" 00:17:13.411 }, 00:17:13.411 "auth": { 00:17:13.411 "state": "completed", 00:17:13.411 "digest": "sha512", 00:17:13.411 "dhgroup": "ffdhe8192" 00:17:13.411 } 00:17:13.411 } 00:17:13.411 ]' 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
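The qpairs JSON in the trace is what the assertions run against: listen/peer addresses plus an auth object reporting the negotiated digest, dhgroup and state. A compact sketch of the three checks, with the jq paths exactly as they appear above (rpc.py path copied from the log; this round pins sha512/ffdhe8192):

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc_py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]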
00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.411 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.669 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:17:14.234 13:00:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.234 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:14.492 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:15.057 00:17:15.057 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.057 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.057 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.315 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.315 13:00:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.315 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.315 13:00:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:15.315 { 00:17:15.315 "cntlid": 139, 00:17:15.315 "qid": 0, 00:17:15.315 "state": "enabled", 00:17:15.315 "listen_address": { 00:17:15.315 "trtype": "RDMA", 00:17:15.315 "adrfam": "IPv4", 00:17:15.315 "traddr": "192.168.100.8", 00:17:15.315 "trsvcid": "4420" 00:17:15.315 }, 00:17:15.315 "peer_address": { 00:17:15.315 "trtype": "RDMA", 00:17:15.315 "adrfam": "IPv4", 00:17:15.315 "traddr": "192.168.100.8", 00:17:15.315 "trsvcid": "48698" 00:17:15.315 }, 00:17:15.315 "auth": { 00:17:15.315 "state": "completed", 00:17:15.315 "digest": "sha512", 00:17:15.315 "dhgroup": "ffdhe8192" 00:17:15.315 } 00:17:15.315 } 00:17:15.315 ]' 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.315 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.573 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZDYyYWJkZDZjZjJmNWZiNTdmNGUxNDEzOGEyMmIzNDNW+oLv: 00:17:16.140 13:00:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.399 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:16.399 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:16.966 00:17:16.966 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:16.966 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:16.966 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.225 13:00:54 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:17.225 { 00:17:17.225 "cntlid": 141, 00:17:17.225 "qid": 0, 00:17:17.225 "state": "enabled", 00:17:17.225 "listen_address": { 00:17:17.225 "trtype": "RDMA", 00:17:17.225 "adrfam": "IPv4", 00:17:17.225 "traddr": "192.168.100.8", 00:17:17.225 "trsvcid": "4420" 00:17:17.225 }, 00:17:17.225 "peer_address": { 00:17:17.225 "trtype": "RDMA", 00:17:17.225 "adrfam": "IPv4", 00:17:17.225 "traddr": "192.168.100.8", 00:17:17.225 "trsvcid": "36968" 00:17:17.225 }, 00:17:17.225 "auth": { 00:17:17.225 "state": "completed", 00:17:17.225 "digest": "sha512", 00:17:17.225 "dhgroup": "ffdhe8192" 00:17:17.225 } 00:17:17.225 } 00:17:17.225 ]' 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.225 13:00:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:17.225 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.225 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.225 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.483 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NjJlMjA0Zjk0NDY3NmQ5YzU5ZmZlMGIwNDE1ZDk3ODVhYzNlZWZhMWIwNWQ5YTEwosDlrg==: 00:17:18.048 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.307 13:00:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.307 13:00:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.307 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.874 00:17:18.874 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.874 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.874 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:19.132 { 00:17:19.132 "cntlid": 143, 00:17:19.132 "qid": 0, 00:17:19.132 "state": "enabled", 00:17:19.132 "listen_address": { 00:17:19.132 "trtype": "RDMA", 00:17:19.132 "adrfam": "IPv4", 00:17:19.132 "traddr": "192.168.100.8", 00:17:19.132 "trsvcid": "4420" 00:17:19.132 }, 00:17:19.132 "peer_address": { 00:17:19.132 "trtype": "RDMA", 00:17:19.132 "adrfam": "IPv4", 00:17:19.132 "traddr": "192.168.100.8", 00:17:19.132 "trsvcid": "56218" 00:17:19.132 }, 00:17:19.132 "auth": { 00:17:19.132 "state": "completed", 00:17:19.132 "digest": "sha512", 00:17:19.132 "dhgroup": "ffdhe8192" 00:17:19.132 } 00:17:19.132 } 00:17:19.132 ]' 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.132 13:00:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.390 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:NWViOTUyZjMwMTQ5YWYyNjliZDk5NDBkZDQxMzI3NzUwOGRiMzk4ZjQyZmZkNjYxYTAwYzVhMGM5Mzk5YzVjMJNW0Ww=: 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:19.957 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.216 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.216 13:00:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:20.216 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:20.783 00:17:20.783 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.783 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.783 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:21.041 { 00:17:21.041 "cntlid": 145, 00:17:21.041 "qid": 0, 00:17:21.041 "state": "enabled", 00:17:21.041 "listen_address": { 00:17:21.041 "trtype": "RDMA", 00:17:21.041 "adrfam": "IPv4", 00:17:21.041 "traddr": "192.168.100.8", 00:17:21.041 "trsvcid": "4420" 00:17:21.041 }, 00:17:21.041 "peer_address": { 00:17:21.041 "trtype": "RDMA", 00:17:21.041 "adrfam": "IPv4", 00:17:21.041 "traddr": "192.168.100.8", 00:17:21.041 "trsvcid": "50623" 00:17:21.041 }, 00:17:21.041 "auth": { 00:17:21.041 "state": "completed", 00:17:21.041 "digest": "sha512", 00:17:21.041 "dhgroup": "ffdhe8192" 00:17:21.041 } 00:17:21.041 } 00:17:21.041 ]' 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.041 13:00:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.300 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWExNTk2ZDY3MjdkYWY3YTMzYzM3MTRhYWVlNzdlMjBlMTU3ZDRjMjE2ZWE5NzNlD6ZeJA==: 00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:21.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.869 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:21.870 13:00:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:17:54.075 request:
00:17:54.075 {
00:17:54.075 "name": "nvme0",
00:17:54.075 "trtype": "rdma",
00:17:54.075 "traddr": "192.168.100.8",
00:17:54.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562",
00:17:54.075 "adrfam": "ipv4",
00:17:54.075 "trsvcid": "4420",
00:17:54.075 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:54.075 "dhchap_key": "key2",
00:17:54.075 "method": "bdev_nvme_attach_controller",
00:17:54.075 "req_id": 1
00:17:54.075 }
00:17:54.075 Got JSON-RPC error response
00:17:54.075 response:
00:17:54.075 {
00:17:54.075 "code": -32602,
00:17:54.075 "message": "Invalid parameters"
00:17:54.075 }
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # cleanup
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3611971 ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3611971'
00:17:54.075 killing process with pid 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3611971
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:17:54.075 rmmod nvme_rdma
00:17:54.075 rmmod nvme_fabrics
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3611780 ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3611780 ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3611780'
00:17:54.075 killing process with pid 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3611780
00:17:54.075 13:01:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:54.075 13:01:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:17:54.075 13:01:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4co /tmp/spdk.key-sha256.zws /tmp/spdk.key-sha384.z9C /tmp/spdk.key-sha512.eXv /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log
00:17:54.075
00:17:54.075 real 2m46.934s
00:17:54.075 user 6m11.542s
00:17:54.075 sys 0m22.262s
00:17:54.076 13:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:54.076 13:01:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.076 ************************************
00:17:54.076 END TEST nvmf_auth_target
00:17:54.076 ************************************
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']'
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']'
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]]
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']'
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]]
00:17:54.076 13:01:31 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma
00:17:54.076 13:01:31 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:17:54.076 13:01:31 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:54.076 13:01:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:17:54.076 ************************************
00:17:54.076 START TEST nvmf_device_removal
00:17:54.076 ************************************
00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1121 -- # test/nvmf/target/device_removal.sh --transport=rdma
00:17:54.075 * Looking for test storage...
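[The teardown traced above expands autotest_common.sh's killprocess helper one xtrace line at a time. Reassembled here as a readable bash sketch — an approximation reconstructed from the trace, not the helper's actual source, since the xtrace only shows the branches that executed:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                 # '[' -z 3611780 ']' in the trace
      kill -0 "$pid"                            # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1    # never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                               # reap the child, collect its status
  }

In the trace it runs twice: once for the host-side reactor (pid 3611971, reactor_1) and once, via nvmftestfini, for the nvmf target (pid 3611780, reactor_0), before the key files and auth logs are removed.]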
00:17:54.076 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:54.076 13:01:31 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:17:54.076 13:01:31 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:54.076 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:17:54.077 
13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:54.077 #define SPDK_CONFIG_H 00:17:54.077 #define SPDK_CONFIG_APPS 1 00:17:54.077 #define SPDK_CONFIG_ARCH native 00:17:54.077 #undef SPDK_CONFIG_ASAN 00:17:54.077 #undef SPDK_CONFIG_AVAHI 00:17:54.077 #undef SPDK_CONFIG_CET 00:17:54.077 #define SPDK_CONFIG_COVERAGE 1 00:17:54.077 #define SPDK_CONFIG_CROSS_PREFIX 00:17:54.077 #undef SPDK_CONFIG_CRYPTO 00:17:54.077 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:54.077 #undef SPDK_CONFIG_CUSTOMOCF 00:17:54.077 #undef SPDK_CONFIG_DAOS 00:17:54.077 #define SPDK_CONFIG_DAOS_DIR 00:17:54.077 #define SPDK_CONFIG_DEBUG 1 00:17:54.077 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:54.077 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:17:54.077 #define SPDK_CONFIG_DPDK_INC_DIR 00:17:54.077 #define SPDK_CONFIG_DPDK_LIB_DIR 00:17:54.077 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:54.077 #undef SPDK_CONFIG_DPDK_UADK 00:17:54.077 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:17:54.077 #define SPDK_CONFIG_EXAMPLES 1 00:17:54.077 #undef SPDK_CONFIG_FC 00:17:54.077 #define SPDK_CONFIG_FC_PATH 00:17:54.077 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:54.077 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:54.077 #undef SPDK_CONFIG_FUSE 00:17:54.077 #undef SPDK_CONFIG_FUZZER 00:17:54.077 #define SPDK_CONFIG_FUZZER_LIB 00:17:54.077 #undef SPDK_CONFIG_GOLANG 00:17:54.077 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:17:54.077 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:17:54.077 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:54.077 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:17:54.077 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:54.077 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:54.077 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:17:54.077 #define SPDK_CONFIG_IDXD 1 00:17:54.077 #undef SPDK_CONFIG_IDXD_KERNEL 00:17:54.077 #undef SPDK_CONFIG_IPSEC_MB 00:17:54.077 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:54.077 #define SPDK_CONFIG_ISAL 1 00:17:54.077 
#define SPDK_CONFIG_ISAL_CRYPTO 1 00:17:54.077 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:17:54.077 #define SPDK_CONFIG_LIBDIR 00:17:54.077 #undef SPDK_CONFIG_LTO 00:17:54.077 #define SPDK_CONFIG_MAX_LCORES 00:17:54.077 #define SPDK_CONFIG_NVME_CUSE 1 00:17:54.077 #undef SPDK_CONFIG_OCF 00:17:54.077 #define SPDK_CONFIG_OCF_PATH 00:17:54.077 #define SPDK_CONFIG_OPENSSL_PATH 00:17:54.077 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:54.077 #define SPDK_CONFIG_PGO_DIR 00:17:54.077 #undef SPDK_CONFIG_PGO_USE 00:17:54.077 #define SPDK_CONFIG_PREFIX /usr/local 00:17:54.077 #undef SPDK_CONFIG_RAID5F 00:17:54.077 #undef SPDK_CONFIG_RBD 00:17:54.077 #define SPDK_CONFIG_RDMA 1 00:17:54.077 #define SPDK_CONFIG_RDMA_PROV verbs 00:17:54.077 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:54.077 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:17:54.077 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:54.077 #define SPDK_CONFIG_SHARED 1 00:17:54.077 #undef SPDK_CONFIG_SMA 00:17:54.077 #define SPDK_CONFIG_TESTS 1 00:17:54.077 #undef SPDK_CONFIG_TSAN 00:17:54.077 #define SPDK_CONFIG_UBLK 1 00:17:54.077 #define SPDK_CONFIG_UBSAN 1 00:17:54.077 #undef SPDK_CONFIG_UNIT_TESTS 00:17:54.077 #undef SPDK_CONFIG_URING 00:17:54.077 #define SPDK_CONFIG_URING_PATH 00:17:54.077 #undef SPDK_CONFIG_URING_ZNS 00:17:54.077 #undef SPDK_CONFIG_USDT 00:17:54.077 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:54.077 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:54.077 #undef SPDK_CONFIG_VFIO_USER 00:17:54.077 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:54.077 #define SPDK_CONFIG_VHOST 1 00:17:54.077 #define SPDK_CONFIG_VIRTIO 1 00:17:54.077 #undef SPDK_CONFIG_VTUNE 00:17:54.077 #define SPDK_CONFIG_VTUNE_DIR 00:17:54.077 #define SPDK_CONFIG_WERROR 1 00:17:54.077 #define SPDK_CONFIG_WPDK_DIR 00:17:54.077 #undef SPDK_CONFIG_XNVME 00:17:54.077 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:17:54.077 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- 
pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@57 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@61 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # : 1 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # : 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # export 
SPDK_TEST_ISCSI_INITIATOR 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # : 1 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # : 1 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # : rdma 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # export 
SPDK_TEST_BLOBFS 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # : 1 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # : 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # : 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # : true 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 
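The long run of ": 0" / "export FLAG" pairs above is bash's default-and-export idiom: the no-op ':' evaluates "${VAR:=default}", which assigns only when the flag was not already set by autorun-spdk.conf. A minimal sketch of the pattern (flag names taken from the trace; the exact autotest_common.sh wording is assumed):

#!/usr/bin/env bash
# Default-and-export idiom behind the ": 0" / "export FLAG" pairs traced above.
# ':' is the shell no-op; ${VAR:=default} assigns only when VAR is unset or
# empty, so values injected earlier via autorun-spdk.conf win over defaults.
: "${SPDK_TEST_NVMF:=0}"
export SPDK_TEST_NVMF

: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT

echo "SPDK_TEST_NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT"

This is why SPDK_TEST_NVMF traces as ": 1" here: the conf file exported it first, so the default never applies.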
00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # : 0 00:17:54.078 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # : mlx5 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@166 -- # : 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # : 0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # cat 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@255 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # export valgrind= 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # valgrind= 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # uname -s 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@278 -- # MAKE=make 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j72 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # TEST_MODE= 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # for i in "$@" 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # case "$i" in 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # [[ -z 3634487 ]] 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@317 -- # kill -0 3634487 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:17:54.079 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local mount target_dir 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.m68J2a 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.m68J2a/tests/target /tmp/spdk.m68J2a 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # df -T 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=966955008 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4317474816 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=84876128256 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=94508605440 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=9632477184 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # 
avails["$mount"]=47244177408 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=10125312 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=18878906368 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=18901721088 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=22814720 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=47253860352 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=47254302720 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=442368 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # avails["$mount"]=9450856448 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # sizes["$mount"]=9450860544 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:17:54.080 * Looking for test storage... 
00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@367 -- # local target_space new_size 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@371 -- # mount=/ 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@373 -- # target_space=84876128256 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # new_size=11847069696 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # return 0 00:17:54.080 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1678 -- # set -o errtrace 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # true 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # xtrace_fd 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 
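xtrace_restore above re-arms the tracing rig that produces this log: errtrace plus extdebug so an ERR trap can walk the call stack, and a PS4 stamping every traced line with time and source location (the "13:01:31 ... -- file@line --" prefix seen throughout). A small sketch of that setup, assuming a bash that expands \t inside PS4 as this run's evidently does:

#!/usr/bin/env bash
set -o errtrace      # propagate the ERR trap into functions and subshells
shopt -s extdebug

print_backtrace() {
    local i
    for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
        echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$i - 1]})" >&2
    done
}
trap 'trap - ERR; print_backtrace >&2' ERR

PS4=' \t -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '
set -x
false    # demo failure: fires the ERR trap and prints the backtrace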
00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:17:54.081 13:01:31 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:17:54.081 13:01:31 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.654 13:01:37 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:00.654 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:00.654 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:00.654 13:01:37 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:00.654 Found net devices under 0000:18:00.0: mlx_0_0 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:00.654 Found net devices under 0000:18:00.1: mlx_0_1 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:00.654 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:00.655 32: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.655 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:18:00.655 altname enp24s0f0np0 00:18:00.655 altname ens785f0np0 00:18:00.655 inet 192.168.100.8/24 scope global mlx_0_0 00:18:00.655 valid_lft forever preferred_lft forever 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr 
show mlx_0_1 00:18:00.655 33: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:00.655 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:18:00.655 altname enp24s0f1np1 00:18:00.655 altname ens785f1np1 00:18:00.655 inet 192.168.100.9/24 scope global mlx_0_1 00:18:00.655 valid_lft forever preferred_lft forever 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 
-- # get_ip_address mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:00.655 192.168.100.9' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:00.655 192.168.100.9' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:00.655 192.168.100.9' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:18:00.655 ************************************ 00:18:00.655 START TEST nvmf_device_removal_pci_remove_no_srq 00:18:00.655 ************************************ 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1121 -- # test_remove_and_rescan --no-srq 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.655 13:01:37 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=3637377 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 3637377 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 3637377 ']' 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.655 13:01:37 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:00.655 [2024-05-15 13:01:37.623082] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:18:00.655 [2024-05-15 13:01:37.623135] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.655 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.655 [2024-05-15 13:01:37.693596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:00.655 [2024-05-15 13:01:37.780519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.655 [2024-05-15 13:01:37.780556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.655 [2024-05-15 13:01:37.780565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.655 [2024-05-15 13:01:37.780573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.655 [2024-05-15 13:01:37.780580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
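
The records above show nvmfappstart launching nvmf_tgt in the background and waitforlisten blocking until pid 3637377 answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern, assuming the stock scripts/rpc.py client, any cheap RPC such as rpc_get_methods, and an illustrative retry budget (the autotest helper's real limits and sleeps may differ):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # checkout path used by this run
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll the RPC socket; rpc.py exits non-zero until the target binds it.
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done
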
00:18:00.655 [2024-05-15 13:01:37.780664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.655 [2024-05-15 13:01:37.780666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.655 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.655 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:18:00.655 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.655 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.655 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.656 [2024-05-15 13:01:38.502397] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cf2a90/0x1cf6f80) succeed. 00:18:00.656 [2024-05-15 13:01:38.511677] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cf3f90/0x1d38610) succeed. 
00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:00.656 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:00.915 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:00.915 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.915 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.915 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:18:00.916 
13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 [2024-05-15 13:01:38.653140] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:00.916 [2024-05-15 13:01:38.653520] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:00.916 [2024-05-15 13:01:38.741580] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # 
generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=3637539 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.916 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 3637539 /var/tmp/bdevperf.sock 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@827 -- # '[' -z 3637539 ']' 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
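
The per-netdev provisioning that completed just above (malloc bdev, subsystem, namespace, RDMA listener on port 4420) condenses to the sketch below. The rpc helper is a hypothetical stand-in for rpc_cmd, and the address lookup mirrors the get_ip_address pipeline traced earlier; the sizes, NQN pattern, and serial pattern are taken from the log itself.

    rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
    provision_netdev() {
        local dev_name=$1
        local nqn=nqn.2016-06.io.spdk:system_$dev_name
        local ip
        # First IPv4 address on the interface, e.g. 192.168.100.8 for mlx_0_0
        ip=$(ip -o -4 addr show "$dev_name" | awk '{print $4}' | cut -d/ -f1)
        rpc bdev_malloc_create 128 512 -b "$dev_name"               # 128 MiB bdev, 512 B blocks
        rpc nvmf_create_subsystem "$nqn" -a -s "SPDK000$dev_name"   # -a: allow any host
        rpc nvmf_subsystem_add_ns "$nqn" "$dev_name"
        rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420
    }
    provision_netdev mlx_0_0   # listener on 192.168.100.8:4420
    provision_netdev mlx_0_1   # listener on 192.168.100.9:4420
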
00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:00.917 13:01:38 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@860 -- # return 0 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:01.854 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:01.855 Nvme_mlx_0_0n1 00:18:01.855 13:01:39 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.855 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:02.115 Nvme_mlx_0_1n1 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3637620 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:18:02.115 13:01:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:18:07.404 13:01:44 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.404 mlx5_0 00:18:07.404 13:01:44 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 
00:18:07.404 13:01:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 
[2024-05-15 13:01:44.979133] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
00:18:07.404 [2024-05-15 13:01:44.979810] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 
00:18:07.404 [2024-05-15 13:01:44.981634] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 
00:18:07.404 [2024-05-15 13:01:44.981658] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 
00:18:07.404 [2024-05-15 13:01:44.981667] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 
00:18:07.404 [2024-05-15 13:01:44.981675] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 
00:18:07.404 [2024-05-15 13:01:44.981683] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
00:18:07.404 [2024-05-15 13:01:44.981690] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 
00:18:07.404 [2024-05-15 13:01:44.981698] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 
[... the dump emits one 'Request Data From Pool' / 'Request opcode' pair per outstanding request, up to the queue depth of 96; the remaining pairs, identical in form, are omitted here ...] 
[2024-05-15 13:01:44.983166] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 
[2024-05-15 13:01:44.983173] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
00:18:12.676 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 
13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 
13:01:50 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:18:12.935 13:01:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:18:13.871 [2024-05-15 13:01:51.508856] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x1cf3bf0, err 11. Skip rescan. 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:18:13.871 13:01:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:18:14.129 [2024-05-15 13:01:51.871928] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cf2a90/0x1cf6f80) succeed. 00:18:14.129 [2024-05-15 13:01:51.871994] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
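
This stretch of the log is the heart of the test: remove_one_nic hot-unplugs the PF behind mlx_0_0 through sysfs, the target destroys the affected qpairs and drops the mlx5_0 device, and rescan_pci brings the slot back. A sketch of that cycle, under the assumption that the two echo 1 commands write to the device's remove hook and to /sys/bus/pci/rescan (the redirect targets are not visible in the xtrace output) and that the delays are illustrative; the recovery steps mirror the records that follow below.

    pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0   # from the readlink above
    echo 1 > "$pci_dir/remove"      # surprise-remove the PF behind mlx_0_0
    echo 1 > /sys/bus/pci/rescan    # ask the kernel to re-enumerate the slot
    # Wait for the netdev to reappear under the rescanned PCI device.
    for i in $(seq 1 10); do
        new_net_dev=$(ls "$pci_dir/net" 2> /dev/null) && break
        sleep 1
    done
    ip link set "$new_net_dev" up
    ip addr add 192.168.100.8/24 dev "$new_net_dev"   # restore the listener address
    # Poll until the target re-registers the IB device (count climbs back above 1),
    # at which point the 192.168.100.8:4420 listener can come back.
    until (( $(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats |
               jq -r '.poll_groups[0].transports[].devices | length') > 1 )); do
        sleep 1
    done
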
00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:17.420 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:17.421 [2024-05-15 13:01:54.908803] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:17.421 [2024-05-15 13:01:54.908836] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:18:17.421 [2024-05-15 13:01:54.908852] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:17.421 [2024-05-15 13:01:54.908870] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.421 13:01:54 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:17.421 13:01:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.421 mlx5_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:17.421 13:01:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device [2024-05-15 13:01:55.084172] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. [2024-05-15 13:01:55.084245] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) [2024-05-15 13:01:55.089752] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) [2024-05-15 13:01:55.089768] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 128 [2024-05-15 13:01:55.089776] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
[2024-05-15 13:01:55.089784 - 13:01:55.091638] rdma.c: 632/634:nvmf_rdma_dump_request: *ERROR*: (queue pair dump condensed: ~128 queued request entries, nearly all logged as Request opcode: 2 with Request Data From Pool: 1; a handful logged as Request opcode: 1 and/or Request Data From Pool: 0)
00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:18:23.991 13:02:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:18:23.991 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:18:23.992 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:18:23.992 13:02:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:18:24.258 [2024-05-15 13:02:01.993479] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1efc390/0x1d38610) succeed. 00:18:24.258 [2024-05-15 13:02:01.993569] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
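Both probes in the trace go through the target's nvmf_get_stats RPC: the name filter (@77) answers "does the target still know rdma device X?" and the length filter (@83) counts devices across the first poll group. A sketch of the two helpers as reconstructed from the xtrace, assuming rpc_cmd is a thin wrapper around scripts/rpc.py from the SPDK tree (the wrapper itself is not shown here); the helpers in device_removal.sh may differ in detail:

    check_rdma_dev_exists_in_nvmf_tgt() {       # @76-@78 in the trace
        local rdma_dev_name=$1
        ./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep "$rdma_dev_name"              # grep's exit status becomes the result
    }
    get_rdma_dev_count_in_nvmf_tgt() {           # @82-@83 in the trace
        ./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices | length'
    }

The `return 1` / `break` pairs above fall out of the grep exit status: once the removed device no longer appears in the stats, check_rdma_dev_exists_in_nvmf_tgt fails and the retry loop exits.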
00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:18:27.545 [2024-05-15 13:02:05.089888] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:27.545 [2024-05-15 13:02:05.089925] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:18:27.545 [2024-05-15 13:02:05.089943] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:27.545 [2024-05-15 13:02:05.089960] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:18:27.545 13:02:05 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 
3637620 00:19:35.366 0 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 3637539 ']' 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637539' 00:19:35.366 killing process with pid 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 3637539 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:19:35.366 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:19:35.366 [2024-05-15 13:01:38.799014] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:19:35.366 [2024-05-15 13:01:38.799076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637539 ] 00:19:35.366 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.366 [2024-05-15 13:01:38.866534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.366 [2024-05-15 13:01:38.948975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.366 Running I/O for 90 seconds... 
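The killprocess steps traced above (autotest_common.sh @946-@970) follow a fixed pattern: confirm the pid is set and alive, refuse to signal a process whose comm is sudo, then kill and wait for it to exit. A sketch reconstructed from the xtrace, not a verbatim copy of the helper; `wait` only applies when the pid is a child of the calling shell, as it is here:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1                      # @946: '[' -z ... ']'
        kill -0 "$pid" 2>/dev/null || return 1           # @950: still running?
        if [[ $(uname) == Linux ]]; then                 # @951
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @952 (here: reactor_2)
            [[ "$process_name" == sudo ]] && return 1    # @956: never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"             # @964
        kill "$pid"                                      # @965
        wait "$pid" 2>/dev/null || true                  # @970: reap and ignore the status
    }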
00:19:35.366 [2024-05-15 13:01:44.986172] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:35.366 [2024-05-15 13:01:44.986204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.366 [2024-05-15 13:01:44.986217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0 00:19:35.366 [2024-05-15 13:01:44.986228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.366 [2024-05-15 13:01:44.986238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0 00:19:35.366 [2024-05-15 13:01:44.986248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.366 [2024-05-15 13:01:44.986257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0 00:19:35.366 [2024-05-15 13:01:44.986267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.366 [2024-05-15 13:01:44.986276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0 00:19:35.366 [2024-05-15 13:01:44.990551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.366 [2024-05-15 13:01:44.990576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:19:35.366 [2024-05-15 13:01:44.990625] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:19:35.366 [2024-05-15 13:01:44.996171 - 13:01:45.989183] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (notice condensed: repeated roughly every 10 ms, ~100 occurrences over the span shown)
00:19:35.367 [2024-05-15 13:01:45.993291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:202072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x181700
00:19:35.367 [2024-05-15 13:01:45.993310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0
[... 84 further READ command/completion pairs (lba:202080 through lba:202744 in steps of 8, len:8 each, SGL KEYED DATA BLOCK addresses descending from 0x2000077a6000 to 0x200007700000, key:0x181700), every one completed ABORTED - SQ DELETION (00/08) ...]
00:19:35.370 [2024-05-15 13:01:45.995101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:202752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:35.370 [2024-05-15 13:01:45.995110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0
[... 41 further WRITE command/completion pairs (lba:202760 through lba:203080 in steps of 8, len:8 each, SGL DATA BLOCK OFFSET 0x0), all likewise completed ABORTED - SQ DELETION (00/08) ...]
00:19:35.371 [2024-05-15 13:01:46.008990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:35.371 [2024-05-15 13:01:46.009005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:35.371 [2024-05-15 13:01:46.009017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:203088 len:8 PRP1 0x0 PRP2 0x0
00:19:35.371 [2024-05-15 13:01:46.009027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:35.371 [2024-05-15 13:01:46.010124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:19:35.371 [2024-05-15 13:01:46.010394] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:19:35.371 [2024-05-15 13:01:46.010409] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.371 [2024-05-15 13:01:46.010417] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:19:35.371 [2024-05-15 13:01:46.010435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.371 [2024-05-15 13:01:46.010445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:19:35.371 [2024-05-15 13:01:46.010457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:19:35.371 [2024-05-15 13:01:46.010466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:19:35.371 [2024-05-15 13:01:46.010476] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:19:35.371 [2024-05-15 13:01:46.010496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.371 [2024-05-15 13:01:46.010506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:19:35.371 [2024-05-15 13:01:47.013317] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:19:35.371 [2024-05-15 13:01:47.013358] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.371 [2024-05-15 13:01:47.013369] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:19:35.371 [2024-05-15 13:01:47.013394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.371 [2024-05-15 13:01:47.013406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:19:35.371 [2024-05-15 13:01:47.013421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:19:35.371 [2024-05-15 13:01:47.013433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:19:35.371 [2024-05-15 13:01:47.013446] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:19:35.371 [2024-05-15 13:01:47.013476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.371 [2024-05-15 13:01:47.013488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[... the same RDMA address resolution error / failed reconnect / "Resetting controller failed." cycle repeated for the retries at 13:01:48.017, 13:01:50.023, 13:01:51.029 and 13:01:53.034; the 13:01:50 retry additionally logged bdev_nvme.c:2873:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. ...]
00:19:35.372 [2024-05-15 13:01:55.039233] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.372 [2024-05-15 13:01:55.039262] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:19:35.372 [2024-05-15 13:01:55.039292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.372 [2024-05-15 13:01:55.039304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:19:35.372 [2024-05-15 13:01:55.039318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:19:35.372 [2024-05-15 13:01:55.039328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:19:35.372 [2024-05-15 13:01:55.039339] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:19:35.372 [2024-05-15 13:01:55.040996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.372 [2024-05-15 13:01:55.041016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:19:35.372 [2024-05-15 13:01:55.087605] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:19:35.372 [2024-05-15 13:01:55.087631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:35.372 [2024-05-15 13:01:55.087642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32673 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs printed for cid:2, cid:3 and cid:4 ...]
00:19:35.372 [2024-05-15 13:01:55.095435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.372 [2024-05-15 13:01:55.095456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:19:35.372 [2024-05-15 13:01:55.095483] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:19:35.372 [2024-05-15 13:01:55.097612] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the same failover notice repeated at roughly 10 ms intervals through 13:01:56.080390 ...]
00:19:35.373 [2024-05-15 13:01:56.085014] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:35.373 [2024-05-15 13:01:56.090417] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:35.373 [2024-05-15 13:01:56.097934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f6000 len:0x1000 key:0x1bf700
00:19:35.373 [2024-05-15 13:01:56.097947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0
[... analogous READ command prints for lba 19496 through 20336, each followed by the same ABORTED - SQ DELETION completion ...]
00:19:35.376 [2024-05-15 13:01:56.100140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf700
00:19:35.376 [2024-05-15 13:01:56.100150] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.100466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf700 00:19:35.376 [2024-05-15 13:01:56.100475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.109797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.376 [2024-05-15 13:01:56.109809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.109821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.376 [2024-05-15 13:01:56.109830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 
13:01:56.109841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.376 [2024-05-15 13:01:56.109850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:98e4fcb0 sqhd:3530 p:0 m:0 dnr:0 00:19:35.376 [2024-05-15 13:01:56.122840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.376 [2024-05-15 13:01:56.122854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.376 [2024-05-15 13:01:56.122863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20504 len:8 PRP1 0x0 PRP2 0x0 00:19:35.376 [2024-05-15 13:01:56.122873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.377 [2024-05-15 13:01:56.122919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:19:35.377 [2024-05-15 13:01:56.123146] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:19:35.377 [2024-05-15 13:01:56.123159] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:35.377 [2024-05-15 13:01:56.123167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:19:35.377 [2024-05-15 13:01:56.123183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.377 [2024-05-15 13:01:56.123192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:19:35.377 [2024-05-15 13:01:56.123204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:19:35.377 [2024-05-15 13:01:56.123213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:19:35.377 [2024-05-15 13:01:56.123222] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:19:35.377 [2024-05-15 13:01:56.123240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.377 [2024-05-15 13:01:56.123249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:19:35.377 [2024-05-15 13:01:57.125766] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:19:35.377 [2024-05-15 13:01:57.125793] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:35.377 [2024-05-15 13:01:57.125802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:19:35.377 [2024-05-15 13:01:57.125819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:35.377 [2024-05-15 13:01:57.125829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:19:35.377 [2024-05-15 13:01:57.125841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:19:35.377 [2024-05-15 13:01:57.125850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:19:35.377 [2024-05-15 13:01:57.125860] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:19:35.377 [2024-05-15 13:01:57.125880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.377 [2024-05-15 13:01:57.125890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:35.377 [2024-05-15 13:01:58.128395] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:19:35.377 [2024-05-15 13:01:58.128435] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.377 [2024-05-15 13:01:58.128444] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:19:35.377 [2024-05-15 13:01:58.128465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.377 [2024-05-15 13:01:58.128475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:19:35.377 [2024-05-15 13:01:58.128487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:19:35.377 [2024-05-15 13:01:58.128497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:19:35.377 [2024-05-15 13:01:58.128508] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:19:35.377 [2024-05-15 13:01:58.128535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.377 [2024-05-15 13:01:58.128545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:35.377 [2024-05-15 13:02:00.135037] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.377 [2024-05-15 13:02:00.135083] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:19:35.377 [2024-05-15 13:02:00.135111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.377 [2024-05-15 13:02:00.135121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:19:35.377 [2024-05-15 13:02:00.136714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:19:35.377 [2024-05-15 13:02:00.136731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:19:35.377 [2024-05-15 13:02:00.136742] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:19:35.377 [2024-05-15 13:02:00.137221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.377 [2024-05-15 13:02:00.137235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:35.377 [2024-05-15 13:02:02.144638] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.377 [2024-05-15 13:02:02.144672] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:19:35.377 [2024-05-15 13:02:02.144715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.377 [2024-05-15 13:02:02.144726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:19:35.377 [2024-05-15 13:02:02.144752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:19:35.377 [2024-05-15 13:02:02.144762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:19:35.377 [2024-05-15 13:02:02.144772] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:19:35.377 [2024-05-15 13:02:02.144812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.377 [2024-05-15 13:02:02.144823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:35.377 [2024-05-15 13:02:04.149762] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:19:35.377 [2024-05-15 13:02:04.149797] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:19:35.377 [2024-05-15 13:02:04.149824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:35.377 [2024-05-15 13:02:04.149835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:19:35.377 [2024-05-15 13:02:04.150243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:19:35.377 [2024-05-15 13:02:04.150255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:19:35.377 [2024-05-15 13:02:04.150265] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:19:35.377 [2024-05-15 13:02:04.150292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.377 [2024-05-15 13:02:04.150302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:35.377 [2024-05-15 13:02:05.214451] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:35.377
00:19:35.377 Latency(us)
00:19:35.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.377 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:35.377 Verification LBA range: start 0x0 length 0x8000
00:19:35.377 Nvme_mlx_0_0n1 : 90.01 10721.16 41.88 0.00 0.00 11919.98 2151.29 12079595.52
00:19:35.377 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:35.377 Verification LBA range: start 0x0 length 0x8000
00:19:35.377 Nvme_mlx_0_1n1 : 90.01 9621.98 37.59 0.00 0.00 13282.09 2478.97 11087551.44
00:19:35.377 ===================================================================================================================
00:19:35.377 Total : 20343.14 79.47 0.00 0.00 12564.23 2151.29 12079595.52
00:19:35.377 Received shutdown signal, test time was about 90.000000 seconds
00:19:35.377
00:19:35.377 Latency(us)
00:19:35.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.377 ===================================================================================================================
00:19:35.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 3637377
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@946 -- # '[' -z 3637377 ']'
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@950 -- # kill -0 3637377
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # uname
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637377
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # process_name=reactor_0
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637377'
killing process with pid 3637377
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # kill 3637377
[2024-05-15 13:03:10.545937] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@970 -- # wait 3637377
[2024-05-15 13:03:10.574456] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid=
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0
00:19:35.377
00:19:35.377 real 1m33.327s
00:19:35.377 user 4m25.570s
00:19:35.377 sys 0m4.503s
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1122 -- # xtrace_disable
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:19:35.377 ************************************
00:19:35.377 END TEST nvmf_device_removal_pci_remove_no_srq
00:19:35.377 ************************************
00:19:35.378 13:03:10 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan
13:03:10 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
13:03:10 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1103 -- # xtrace_disable
13:03:10 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x
00:19:35.378 ************************************
00:19:35.378 START TEST nvmf_device_removal_pci_remove
00:19:35.378 ************************************
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1121 -- # test_remove_and_rescan
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@720 -- # xtrace_disable
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=3650139
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 3650139
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 3650139 ']'
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
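The teardown above is driven by the killprocess helper in autotest_common.sh. What follows is a minimal sketch of what the traced calls (@946-@970) amount to, reconstructed from the xtrace rather than copied from the actual source, so treat the exact structure as an approximation:

    # killprocess <pid>: stop a test daemon and reap it (sketch inferred from the trace)
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                             # @946: a pid must be given
        kill -0 "$pid" || return 1                            # @950: process must still be alive
        if [ "$(uname)" = Linux ]; then                       # @951: comm lookup is Linux-only
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @952: e.g. reactor_0
            [ "$process_name" = sudo ] && return 1            # @956: never kill a bare sudo
        fi
        echo "killing process with pid $pid"                  # @964
        kill "$pid"                                           # @965
        wait "$pid"                                           # @970: reap and collect exit status
    }

In the run above the target was pid 3637377 and its main thread reported itself as reactor_0, so the sudo guard passed and the process was killed and reaped cleanly before the next test started.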
00:19:35.378 13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable
13:03:10 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 13:03:11.040519] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization...
[2024-05-15 13:03:11.040572] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 13:03:11.112871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-05-15 13:03:11.203905] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-05-15 13:03:11.203947] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-05-15 13:03:11.203957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-05-15 13:03:11.203965] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-05-15 13:03:11.203972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-05-15 13:03:11.204025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-05-15 13:03:11.204028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 ))
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@726 -- # xtrace_disable
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=()
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
13:03:11 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 13:03:11.945110] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd46a90/0xd4af80) succeed.
[2024-05-15 13:03:11.954442] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd47f90/0xd8c610) succeed.
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 ))
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list)
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 13:03:12.156290] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
[2024-05-15 13:03:12.156699] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list)
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128
13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512
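At this point the target has been fully configured for mlx_0_0, and the trace is repeating the same create_subsystem_and_connect_on_netdev steps for mlx_0_1. Condensed into a sketch built from the rpc_cmd calls traced above (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; that wiring is an assumption, not shown here):

    # Per-netdev target setup, as traced: one 128 MiB malloc bdev with 512-byte
    # blocks, one subsystem named after the netdev, one RDMA listener on its IPv4.
    create_subsystem_on_netdev() {
        local dev_name=$1
        local nqn=nqn.2016-06.io.spdk:system_$dev_name
        local ip
        ip=$(ip -o -4 addr show "$dev_name" | awk '{print $4}' | cut -d/ -f1)   # @30/@113
        rpc_cmd bdev_malloc_create 128 512 -b "$dev_name"                       # @36
        rpc_cmd nvmf_create_subsystem "$nqn" -a -s "SPDK000$dev_name"           # @37
        rpc_cmd nvmf_subsystem_add_ns "$nqn" "$dev_name"                        # @38
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420     # @39
    }

For mlx_0_0 this sequence produced the 192.168.100.8:4420 listener reported by the rdma.c:3032 notice above; the mlx_0_1 pass below yields the matching 192.168.100.9:4420 listener.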
00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 [2024-05-15 13:03:12.244482] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=3650358 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 3650358 /var/tmp/bdevperf.sock 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@827 -- # '[' -z 3650358 ']' 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:35.379 13:03:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@860 -- # return 0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:19:35.379 13:03:13 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.379 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.653 Nvme_mlx_0_0n1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:35.653 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:35.654 Nvme_mlx_0_1n1 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.654 13:03:13 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=3650551 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:19:35.654 13:03:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:19:40.935 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.936 mlx5_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:19:40.936 13:03:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:19:40.936 [2024-05-15 13:03:18.523010] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
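For readability, the helpers exercised in the trace above can be reconstructed from the commands xtrace prints. A minimal sketch (rpc_cmd is SPDK's test wrapper around scripts/rpc.py; the `echo 1` at device_removal.sh line 67 is assumed to redirect to the sysfs `remove` node, since xtrace does not show redirections):

    get_ip_address() {                       # nvmf/common.sh@112-113
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_pci_dir() {                          # device_removal.sh@61-62
        local dev_name=$1
        # the trace resolves /sys/bus/pci/devices/<BDF>/net/<dev>/device;
        # /sys/class/net/<dev>/device reaches the same PCI directory
        readlink -f "/sys/class/net/$dev_name/device"
    }

    get_rdma_device_name() {                 # device_removal.sh@71-72
        local dev_name=$1
        ls "$(get_pci_dir "$dev_name")/infiniband"   # e.g. mlx5_0
    }

    check_rdma_dev_exists_in_nvmf_tgt() {    # device_removal.sh@76-78
        local rdma_dev_name=$1
        rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep "$rdma_dev_name"
    }

    remove_one_nic() {                       # device_removal.sh@66-67
        local dev_name=$1
        echo 1 > "$(get_pci_dir "$dev_name")/remove"   # assumed redirect target
    }

grep's exit status is what the `return 0` / `return 1` at @78 reflects in the trace above.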
00:19:40.936 [2024-05-15 13:03:18.523109] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:40.936 [2024-05-15 13:03:18.525984] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:40.936 [2024-05-15 13:03:18.526005] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 64 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:19:47.524 13:03:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:19:47.524 [2024-05-15 13:03:25.339735] 
rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xe24600, err 11. Skip rescan. 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:19:47.783 13:03:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:19:48.041 [2024-05-15 13:03:25.720070] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd499b0/0xd4af80) succeed. 00:19:48.041 [2024-05-15 13:03:25.720133] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.329 13:03:28 
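The recovery sequence traced at @160-@179 triggers a PCI rescan, polls until the netdev reappears under the device's net/ directory, and brings the link back up. Sketched under the assumption that rescan_pci's `echo 1` (device_removal.sh line 57) targets /sys/bus/pci/rescan and that the poll sleeps briefly between attempts (the interval is not visible in this excerpt):

    rescan_pci() {                 # device_removal.sh@57; redirect target assumed
        echo 1 > /sys/bus/pci/rescan
    }

    wait_for_netdev_up() {         # condensed from device_removal.sh@162-179
        local pci_dir=$1 net_dev=$2 new_net_dev=
        for i in $(seq 1 10); do
            new_net_dev=$(ls "$pci_dir/net" 2>/dev/null)
            [[ -n $new_net_dev && $new_net_dev == "$net_dev" ]] && break
            sleep 1                # assumed interval
        done
        [[ -z $new_net_dev ]] && return 1
        ip link set "$new_net_dev" up
    }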
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 [2024-05-15 13:03:28.741769] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:51.329 [2024-05-15 13:03:28.741798] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:19:51.329 [2024-05-15 13:03:28.741814] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:51.329 [2024-05-15 13:03:28.741829] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:19:51.329 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:19:51.330 13:03:28 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.330 mlx5_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:19:51.330 13:03:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:19:51.330 [2024-05-15 13:03:28.926358] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:19:51.330 [2024-05-15 13:03:28.926432] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:51.330 [2024-05-15 13:03:28.928872] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:51.330 [2024-05-15 13:03:28.928891] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:19:56.602 Connection closed with partial response: 00:19:56.602 00:19:56.602 00:19:57.169 test/nvmf/target/device_removal.sh: line 148: 3650358 Segmentation fault (core dumped) $rootdir/build/examples/bdevperf -m $bdevperf_core_mask -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 90 &> $testdir/try.txt 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 
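The verification pattern at @147-@158 polls the target until the removed RDMA device drops out of nvmf_get_stats, then records the remaining device count. Reconstructed from the trace (the sleep between attempts is assumed; the backtrace later in this log shows the script sleeping 2 seconds in its polling loops):

    get_rdma_dev_count_in_nvmf_tgt() {       # device_removal.sh@82-83
        rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices | length'
    }

    # device_removal.sh@147-158: wait for the removed device to disappear
    for i in $(seq 1 10); do
        check_rdma_dev_exists_in_nvmf_tgt "$rdma_dev_name" || break
        sleep 2                              # assumed interval
    done
    ib_count_after_remove=$(get_rdma_dev_count_in_nvmf_tgt)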
00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:19:57.169 13:03:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:19:58.106 [2024-05-15 13:03:35.843527] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xd4a0d0, err 11. Skip rescan. 00:19:58.106 [2024-05-15 13:03:35.848957] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xd4a0d0, err 11. Skip rescan. 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:19:58.106 13:03:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:19:58.675 [2024-05-15 13:03:36.248384] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd49da0/0xd8c610) succeed. 00:19:58.675 [2024-05-15 13:03:36.248447] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
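Once the netdev is back, @179-@189 restore its address and wait for the target to pick the port up again. A sketch using the helpers reconstructed earlier ($net_dev, $origin_ip and $ib_count_after_remove come from the surrounding test state):

    ip link set "$net_dev" up                          # device_removal.sh@179
    if [[ -z $(get_ip_address "$net_dev") ]]; then     # @180
        ip addr add "$origin_ip/24" dev "$net_dev"     # @181
    fi
    for i in $(seq 1 10); do                           # @186-189
        ib_count=$(get_rdma_dev_count_in_nvmf_tgt)
        (( ib_count > ib_count_after_remove )) && break
        sleep 2                                        # assumed interval
    done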
00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:01.972 [2024-05-15 13:03:39.277176] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:01.972 [2024-05-15 13:03:39.277212] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:20:01.972 [2024-05-15 13:03:39.277232] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:01.972 [2024-05-15 13:03:39.277249] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 3650551 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # trap - ERR 00:20:01.972 
13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # print_backtrace 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1149 -- # [[ ehxBET =~ e ]] 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1151 -- # args=('test_remove_and_rescan' 'nvmf_device_removal_pci_remove' '--transport=rdma') 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1151 -- # local args 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1153 -- # xtrace_disable 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:20:01.972 ========== Backtrace start: ========== 00:20:01.972 00:20:01.972 in test/nvmf/target/device_removal.sh:116 -> stop_bdevperf([]) 00:20:01.972 ... 00:20:01.972 111 00:20:01.972 112 sleep 5 00:20:01.972 113 } 00:20:01.972 114 00:20:01.972 115 function stop_bdevperf() { 00:20:01.972 => 116 wait $bdevperf_rpc_pid 00:20:01.972 117 00:20:01.972 118 killprocess $bdevperf_pid 00:20:01.972 119 bdevperf_pid= 00:20:01.972 120 00:20:01.972 121 cat $testdir/try.txt 00:20:01.972 ... 00:20:01.972 in test/nvmf/target/device_removal.sh:200 -> test_remove_and_rescan([]) 00:20:01.972 ... 00:20:01.972 195 fi 00:20:01.972 196 sleep 2 00:20:01.972 197 done 00:20:01.972 198 done 00:20:01.972 199 00:20:01.972 => 200 stop_bdevperf 00:20:01.972 201 00:20:01.972 202 killprocess $nvmfpid 00:20:01.972 203 nvmfpid= 00:20:01.972 204 00:20:01.972 205 return 0 00:20:01.972 ... 00:20:01.972 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh:1121 -> run_test(["nvmf_device_removal_pci_remove"],["test_remove_and_rescan"]) 00:20:01.972 ... 00:20:01.972 1116 timing_enter $test_name 00:20:01.972 1117 echo "************************************" 00:20:01.972 1118 echo "START TEST $test_name" 00:20:01.972 1119 echo "************************************" 00:20:01.972 1120 xtrace_restore 00:20:01.972 1121 time "$@" 00:20:01.972 1122 xtrace_disable 00:20:01.972 1123 echo "************************************" 00:20:01.972 1124 echo "END TEST $test_name" 00:20:01.972 1125 echo "************************************" 00:20:01.972 1126 timing_exit $test_name 00:20:01.972 ... 00:20:01.972 in test/nvmf/target/device_removal.sh:312 -> main(["--transport=rdma"]) 00:20:01.972 ... 00:20:01.972 307 fi 00:20:01.972 308 test_bonding_slaves_on_nics "${target_nics[@]}" 00:20:01.972 309 } 00:20:01.972 310 00:20:01.972 311 run_test "nvmf_device_removal_pci_remove_no_srq" test_remove_and_rescan --no-srq 00:20:01.972 => 312 run_test "nvmf_device_removal_pci_remove" test_remove_and_rescan 00:20:01.972 313 # bond slaves case needs lag_master & vport_manager are enabled by mlxconfig 00:20:01.972 314 # and not work on CI machine currently. 00:20:01.972 315 # run_test "nvmf_device_removal_bond_slaves" test_bond_slaves 00:20:01.972 316 00:20:01.972 317 nvmftestfini 00:20:01.972 ... 
00:20:01.972 00:20:01.972 ========== Backtrace end ========== 00:20:01.972 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1190 -- # return 0 00:20:01.973 00:20:01.973 real 0m28.369s 00:20:01.973 user 0m18.524s 00:20:01.973 sys 0m1.906s 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@1 -- # process_shm --id 0 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@804 -- # type=--id 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@805 -- # id=0 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:01.973 nvmf_trace.0 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@819 -- # return 0 00:20:01.973 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@1 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:20:01.973 [2024-05-15 13:03:12.299655] Starting SPDK v24.05-pre git sha1 01137ce67 / DPDK 23.11.0 initialization... 00:20:01.973 [2024-05-15 13:03:12.299720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650358 ] 00:20:01.973 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.973 [2024-05-15 13:03:12.367315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.973 [2024-05-15 13:03:12.458517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.973 Running I/O for 90 seconds... 
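The process_shm call traced just above (autotest_common.sh@804-819) archives per-process trace files from /dev/shm into the job's output directory. Roughly, with $output_dir standing in for the Jenkins output path seen in the trace:

    process_shm() {                          # autotest_common.sh@804-819, condensed
        local type=$1 id=$2                  # invoked here as: process_shm --id 0
        local shm_files n
        shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
        [[ -z $shm_files ]] && return 0
        for n in $shm_files; do              # e.g. nvmf_trace.0
            tar -C /dev/shm/ -czf "$output_dir/${n}_shm.tar.gz" "$n"
        done
    }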
00:20:01.973 [2024-05-15 13:03:18.524307] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:01.973 [2024-05-15 13:03:18.524346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.973 [2024-05-15 13:03:18.524360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:20:01.973 [2024-05-15 13:03:18.524373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.973 [2024-05-15 13:03:18.524383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:20:01.973 [2024-05-15 13:03:18.524393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.973 [2024-05-15 13:03:18.524403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:20:01.973 [2024-05-15 13:03:18.524413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:01.973 [2024-05-15 13:03:18.524422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:20:01.973 [2024-05-15 13:03:18.528824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.973 [2024-05-15 13:03:18.528843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:01.973 [2024-05-15 13:03:18.528890] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:20:01.973 [2024-05-15 13:03:18.535082] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.545073] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.555286] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.565353] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.575519] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.585818] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.595845] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.606117] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.616369] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.973 [2024-05-15 13:03:18.626634] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:01.973 [2024-05-15 13:03:18.636662] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
[... the same *NOTICE* repeats ~87 more times at roughly 10 ms intervals, from 13:03:18.646 through 13:03:19.522 ...]
00:20:01.974 [2024-05-15 13:03:19.531343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:191664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:191672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:191680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:191688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:191696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:191704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:191712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:191720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:191728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 
13:03:19.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:191736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:191744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:191752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:191760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:191768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:191776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:191784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:191792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:191800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531746] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:191808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:191816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:191824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.974 [2024-05-15 13:03:19.531808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:191832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:20:01.974 [2024-05-15 13:03:19.531817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:191840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:191848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:191856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:191864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:191872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:191880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:191888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:191896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.531983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.531994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:191904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:191912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:191920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:191928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:191936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:191944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:191952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:191960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:191968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:191976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:191984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:191992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:192000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:192008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:192016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:192024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:192032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:192040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:192048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:192056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:192064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:192072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:192088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:192096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:192112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:20:01.975 [2024-05-15 13:03:19.532569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.975 [2024-05-15 13:03:19.532580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:192128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:192136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:192144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:192152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:192160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:192168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:192176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:192184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:192192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:192200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:192208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:192216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:192232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:192240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:192248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:192256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:192264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:192272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:192280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.532980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.532991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:192288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:192296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:192304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:192320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:192328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:192336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.976 [2024-05-15 13:03:19.533142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:192344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef 00:20:01.976 [2024-05-15 13:03:19.533151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:192352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:192360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:192368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:192376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:192392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:192400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:192408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:192416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:192424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:192432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:192440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:192448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:192456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:192464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:192480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:192488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:192496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:192504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1810ef 00:20:01.977 [2024-05-15 13:03:19.533570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:192512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:192520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:192528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:192536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:192544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:192552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:192560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:192568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:192576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:192584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:192592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:192600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:192608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:192616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:192624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.977 [2024-05-15 13:03:19.533878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.977 [2024-05-15 13:03:19.533889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:192632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.533898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.533909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.533918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.533929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:192648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.533939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.533950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:192656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.533959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.533970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:192664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.533979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.533990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:192672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:01.978 [2024-05-15 13:03:19.534002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.547039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:01.978 [2024-05-15 13:03:19.547059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:01.978 [2024-05-15 13:03:19.547069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192680 len:8 PRP1 0x0 PRP2 0x0 00:20:01.978 [2024-05-15 13:03:19.547079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.978 [2024-05-15 13:03:19.550233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:01.978 [2024-05-15 13:03:19.550513] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:01.978 [2024-05-15 13:03:19.550529] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:01.978 [2024-05-15 13:03:19.550538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:01.978 [2024-05-15 13:03:19.550558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.978 [2024-05-15 13:03:19.550568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:01.978 [2024-05-15 13:03:19.550591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:01.978 [2024-05-15 13:03:19.550601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:01.978 [2024-05-15 13:03:19.550611] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:01.978 [2024-05-15 13:03:19.550633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.978 [2024-05-15 13:03:19.550642] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:01.978 [2024-05-15 13:03:20.553339] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:20:01.978 [2024-05-15 13:03:20.553382] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:01.978 [2024-05-15 13:03:20.553391] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:01.978 [2024-05-15 13:03:20.553415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.978 [2024-05-15 13:03:20.553425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:01.978 [2024-05-15 13:03:20.553438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:01.978 [2024-05-15 13:03:20.553447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:01.978 [2024-05-15 13:03:20.553457] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:01.978 [2024-05-15 13:03:20.553483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
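The ABORTED - SQ DELETION (00/08) completions above are the NVMe generic status (SCT 0x0, SC 0x08) assigned to every I/O still outstanding when its submission queue is torn down during the controller reset; nvme_qpair_abort_queued_reqs then manually completes whatever is still queued in software. A minimal sketch, assuming only the public SPDK headers, of how an application's I/O completion callback can recognize this status (the resubmit policy in the comments is illustrative, not something this log shows):

```c
#include "spdk/nvme.h"       /* spdk_nvme_cpl_is_error(), struct spdk_nvme_cpl */
#include "spdk/nvme_spec.h"  /* SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION */

/* I/O completion callback: classifies the "ABORTED - SQ DELETION (00/08)"
 * status that fills the log above. cb_arg is assumed to point at a counter. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	unsigned int *n_aborted = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The command never executed on the device: its SQ was deleted
		 * mid-reset, so it can safely be resubmitted after reconnect. */
		(*n_aborted)++;
	}
}
```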
00:20:01.978 [2024-05-15 13:03:20.553492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
[... the reconnect attempt is retried at 13:03:21.557, 13:03:23.564 and 13:03:25.569; each retry fails with the same RDMA address resolution error, Failed to connect rqpair=0x2000192ed040, CQ transport error -6, controller reinitialization failed and Resetting controller failed. sequence, each followed by another "resetting controller"; records elided ...]
00:20:01.978 [2024-05-15 13:03:27.574149] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:20:01.978 [2024-05-15 13:03:27.574190] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:20:01.978 [2024-05-15 13:03:27.574240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:01.978 [2024-05-15 13:03:27.574252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:20:01.978 [2024-05-15 13:03:27.574269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:20:01.978 [2024-05-15 13:03:27.574278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:20:01.978 [2024-05-15 13:03:27.574288] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:20:01.978 [2024-05-15 13:03:27.574311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
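Every retry above dies at the first step of the reconnect: address resolution. rdma_resolve_addr() produces RDMA_CM_EVENT_ADDR_ERROR with status -19 (-ENODEV, consistent with the mlx5 modules having been removed by the test), nvme_rdma_validate_cm_event rejects the event, and the controller never leaves the failed state. A standalone sketch, assuming librdmacm, of the same validate-the-CM-event pattern; the target address and port below are placeholders, not values from this run:

```c
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

/* After rdma_resolve_addr(), the next CM event must be ADDR_RESOLVED;
 * anything else (here ADDR_ERROR, status -19) fails the connect attempt,
 * mirroring nvme_rdma_validate_cm_event in the log above. */
int main(void)
{
	struct rdma_event_channel *ch = rdma_create_event_channel();
	struct rdma_cm_id *id = NULL;
	struct rdma_cm_event *ev = NULL;
	struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(4420) };

	inet_pton(AF_INET, "192.168.100.8", &dst.sin_addr);        /* placeholder */
	if (ch == NULL || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP) != 0)
		return 1;
	if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 1000) != 0 ||
	    rdma_get_cm_event(ch, &ev) != 0)
		return 1;
	if (ev->event != RDMA_CM_EVENT_ADDR_RESOLVED)
		fprintf(stderr, "expected ADDR_RESOLVED, got %s (status = %d)\n",
		        rdma_event_str(ev->event), ev->status);
	rdma_ack_cm_event(ev);
	rdma_destroy_id(id);
	rdma_destroy_event_channel(ch);
	return 0;
}
```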
00:20:01.978 [2024-05-15 13:03:27.574320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:20:01.978 [2024-05-15 13:03:28.921141] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:01.978 [2024-05-15 13:03:28.921178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:01.978 [2024-05-15 13:03:28.921190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32588 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:2, cid:3 and cid:4; records elided ...]
00:20:01.978 [2024-05-15 13:03:28.931919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:01.978 [2024-05-15 13:03:28.931953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:20:01.978 [2024-05-15 13:03:28.931996] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:20:01.978 [2024-05-15 13:03:28.932054] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 54 further "bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress." records, logged roughly every 10 ms from 13:03:28.942049 through 13:03:29.473477, elided ...]
00:20:01.979 [2024-05-15 13:03:29.483505] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.493533] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.503558] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.513585] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.523615] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.533640] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.543666] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.553691] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.563717] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.573741] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.579239] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:20:01.979 [2024-05-15 13:03:29.579250] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:20:01.979 [2024-05-15 13:03:29.579274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.979 [2024-05-15 13:03:29.579284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:20:01.979 [2024-05-15 13:03:29.579297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:20:01.979 [2024-05-15 13:03:29.579305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:20:01.979 [2024-05-15 13:03:29.579318] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:20:01.979 [2024-05-15 13:03:29.579340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.979 [2024-05-15 13:03:29.579349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:20:01.979 [2024-05-15 13:03:29.583763] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.593788] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.603814] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.979 [2024-05-15 13:03:29.613841] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
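The ten ERROR/NOTICE lines in the middle of the storm are the substance of this test: once the mlx_0_0 port is pulled, RDMA address resolution fails, the qpair cannot be reconnected (CQ transport error -6 is ENXIO), the controller is marked failed, and the bdev layer records the reset as failed and immediately starts another one. The sketch below shows the shape of that asynchronous disconnect/reconnect cycle. It leans on SPDK's public async-reset entry points (the log itself names spdk_nvme_ctrlr_reconnect_poll_async at nvme_ctrlr.c:1750); the poller wiring, return-code handling, and retry policy are illustrative assumptions, not the actual bdev_nvme code.

    /* Minimal sketch of the disconnect -> reconnect-poll cycle seen above.
     * Assumes SPDK's async reset API; the retry policy is illustrative only. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static void
    start_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_disconnect(ctrlr);      /* the "resetting controller" line */
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
    }

    /* Driven from a poller until the controller reconnects or fails for good. */
    static int
    reset_poll(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);

        if (rc == -EAGAIN) {
            return 0;   /* address resolution / connect still in flight */
        }
        if (rc != 0) {
            /* The "controller reinitialization failed" path: with the
             * port gone, resolution keeps failing and the reset repeats. */
            start_reset(ctrlr);
            return 0;
        }
        return 1;       /* reconnected; I/O can be resumed */
    }

With the physical port removed there is no path back to mlx_0_0, so this loop spins until the test either restores the device or tears the target down.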
00:20:01.979 [2024-05-15 13:03:29.623867 - 13:03:29.844441] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (message repeated 23 times, ~10 ms apart)
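The flood of identical notices around the error block is a re-entrancy guard doing its job: while one reset/failover is in flight, every further failover request (re-triggered here roughly every 10 ms by failing I/O) is refused and logged. A minimal sketch of that guard pattern follows; the type and function names are hypothetical stand-ins, not the actual code behind bdev_nvme.c:2879.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-controller state; the real code keeps an equivalent flag. */
    struct ctrlr_state {
        bool resetting;   /* a reset/failover is already in flight */
    };

    /* Refuse overlapping failovers, producing the repeated NOTICE above. */
    static int
    failover_ctrlr(struct ctrlr_state *st)
    {
        if (st->resetting) {
            fprintf(stderr, "Unable to perform failover, already in progress.\n");
            return -EBUSY;    /* callers retry later, hence the ~10 ms cadence */
        }
        st->resetting = true;
        /* ... disconnect the active path and kick off the async reconnect ... */
        return 0;
    }

The guard is what keeps this part of the log benign: the repetition signals one long-running reset, not an escalating failure.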
00:20:01.980 [2024-05-15 13:03:29.854467 - 13:03:29.924646] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (message repeated 8 times, ~10 ms apart)
00:20:01.980 [2024-05-15 13:03:29.934451 - 13:03:29.935724] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 59 aborted WRITE commands summarized: sqid:1 nsid:1 lba:29224-29688 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 (per-command cids vary)
00:20:01.981 [2024-05-15 13:03:29.935736 - 13:03:29.937182] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 68 aborted READ commands summarized: sqid:1 nsid:1 lba:28672-29208 (step 8) len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000-0x200007986000 len:0x1000 key:0x1bf0ef, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:32588 cdw0:da9b76e0 sqhd:8530 p:0 m:0 dnr:0 (per-command cids vary)
00:20:01.983 [2024-05-15 13:03:29.950221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:01.983 [2024-05-15 13:03:29.950238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:01.983 [2024-05-15 13:03:29.950247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29216 len:8 PRP1 0x0 PRP2 0x0
00:20:01.983 [2024-05-15 13:03:29.950257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
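The WRITE/READ dump above is teardown fallout rather than a new failure: deleting the submission queue during the reset aborts every outstanding command, and each one is printed and completed with ABORTED - SQ DELETION (00/08) so the bdev layer can retry or fail it upward; the final READ (lba:29216) was still queued in software and is completed manually. A minimal sketch of that drain-and-complete step, using hypothetical types in place of SPDK's internal request structures:

    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical request/queue types standing in for SPDK's internals. */
    struct req {
        struct req *next;
        void (*cb)(void *ctx, int status);  /* completion callback */
        void *ctx;
    };

    struct queue {
        struct req *queued;   /* requests never handed to the transport */
    };

    /* On SQ deletion, drain the software queue and complete each request
     * manually, as the "Command completed manually" lines above show. */
    static void
    abort_queued_reqs(struct queue *q)
    {
        struct req *r;

        while ((r = q->queued) != NULL) {
            q->queued = r->next;
            r->cb(r->ctx, -ECANCELED);  /* surfaces as ABORTED - SQ DELETION */
        }
    }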
00:20:01.983 [2024-05-15 13:03:29.950314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@1 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@1 -- # kill -9 3650358
test/nvmf/target/device_removal.sh: line 1: kill: (3650358) - No such process
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1121 -- # trap - ERR
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1121 -- # print_backtrace
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1149 -- # [[ ehxBET =~ e ]]
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1151 -- # args=('--transport=rdma' 'test/nvmf/target/device_removal.sh' 'nvmf_device_removal' '--transport=rdma')
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1151 -- # local args
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1153 -- # xtrace_disable
00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x
00:20:01.983 ========== Backtrace start: ==========
00:20:01.983
00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh:1121 -> run_test(["nvmf_device_removal"],["test/nvmf/target/device_removal.sh"],["--transport=rdma"])
00:20:01.983 ...
00:20:01.983 1116 timing_enter $test_name
00:20:01.983 1117 echo "************************************"
00:20:01.983 1118 echo "START TEST $test_name"
00:20:01.983 1119 echo "************************************"
00:20:01.983 1120 xtrace_restore
00:20:01.983 1121 time "$@"
00:20:01.983 1122 xtrace_disable
00:20:01.983 1123 echo "************************************"
00:20:01.983 1124 echo "END TEST $test_name"
00:20:01.983 1125 echo "************************************"
00:20:01.983 1126 timing_exit $test_name
00:20:01.983 ...
00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh:79 -> main(["--transport=rdma"])
00:20:01.983 ...
00:20:01.983 74 TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:01.983 75 if ((${#TCP_INTERFACE_LIST[@]} > 0)); then
00:20:01.983 76 run_test "nvmf_perf_adq" $rootdir/test/nvmf/target/perf_adq.sh "${TEST_ARGS[@]}"
00:20:01.983 77 fi
00:20:01.983 78 elif [[ $SPDK_TEST_NVMF_TRANSPORT == "rdma" ]]; then
00:20:01.983 => 79 run_test "nvmf_device_removal" test/nvmf/target/device_removal.sh "${TEST_ARGS[@]}"
00:20:01.983 80 run_test "nvmf_srq_overwhelm" "$rootdir/test/nvmf/target/srq_overwhelm.sh" "${TEST_ARGS[@]}"
00:20:01.983 81 fi
00:20:01.983 82 run_test "nvmf_shutdown" $rootdir/test/nvmf/target/shutdown.sh "${TEST_ARGS[@]}"
00:20:01.983 83 fi
00:20:01.983 84
00:20:01.983 ...
00:20:01.983 00:20:01.983 ========== Backtrace end ========== 00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1190 -- # return 0 00:20:01.983 00:20:01.983 real 2m8.308s 00:20:01.983 user 4m46.156s 00:20:01.983 sys 0m11.212s 00:20:01.983 13:03:39 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1 -- # exit 1 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1121 -- # trap - ERR 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1121 -- # print_backtrace 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1149 -- # [[ ehxBET =~ e ]] 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1151 -- # args=('--transport=rdma' '/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_rdma' '/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf') 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1151 -- # local args 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1153 -- # xtrace_disable 00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:01.983 ========== Backtrace start: ========== 00:20:01.983 00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh:1121 -> run_test(["nvmf_rdma"],["/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=rdma"]) 00:20:01.983 ... 00:20:01.983 1116 timing_enter $test_name 00:20:01.983 1117 echo "************************************" 00:20:01.983 1118 echo "START TEST $test_name" 00:20:01.983 1119 echo "************************************" 00:20:01.983 1120 xtrace_restore 00:20:01.983 1121 time "$@" 00:20:01.983 1122 xtrace_disable 00:20:01.983 1123 echo "************************************" 00:20:01.983 1124 echo "END TEST $test_name" 00:20:01.983 1125 echo "************************************" 00:20:01.983 1126 timing_exit $test_name 00:20:01.983 ... 00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh:280 -> main(["/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf"]) 00:20:01.983 ... 00:20:01.983 275 if [ $SPDK_TEST_NVMF -eq 1 ]; then 00:20:01.983 276 export NET_TYPE 00:20:01.983 277 # The NVMe-oF run test cases are split out like this so that the parser that compiles the 00:20:01.983 278 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:20:01.983 279 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:20:01.983 => 280 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:01.983 281 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:01.983 282 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:20:01.983 283 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:01.983 284 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:20:01.983 285 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:01.983 ... 
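The backtrace above, and the outer one that follows, both funnel through the run_test wrapper whose source is quoted in the stack listing (autotest_common.sh lines 1116-1126). A minimal self-contained sketch of that pattern is below; timing_enter/timing_exit and the xtrace helpers belong to SPDK's autotest_common.sh and are stubbed as no-ops here so the sketch runs on its own:

#!/usr/bin/env bash
# Sketch of the run_test wrapper pattern seen in the backtrace above.
# The real timing/xtrace helpers live in autotest_common.sh; no-op stubs here.
timing_enter() { :; }
timing_exit() { :; }
xtrace_restore() { :; }
xtrace_disable() { :; }

run_test() {
	local test_name=$1
	shift
	timing_enter "$test_name"
	echo "START TEST $test_name"
	# Time the test body. With errexit armed, a non-zero exit never returns
	# here: it fires the ERR trap, which prints the bash call stack.
	time "$@"
	echo "END TEST $test_name"
	timing_exit "$test_name"
}

run_test "nvmf_device_removal" test/nvmf/target/device_removal.sh --transport=rdma

Because the suite runs with errexit set, the failing device_removal.sh exit never reaches the END TEST lines; the ERR trap and print_backtrace take over instead, which is exactly what produces the nested stack listings in this log.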
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1121 -- # trap - ERR
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1121 -- # print_backtrace
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1149 -- # [[ ehxBET =~ e ]]
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1151 -- # args=('--transport=rdma' '/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_rdma' '/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf')
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1151 -- # local args
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1153 -- # xtrace_disable
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:20:01.983 ========== Backtrace start: ==========
00:20:01.983
00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh:1121 -> run_test(["nvmf_rdma"],["/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=rdma"])
00:20:01.983 ...
00:20:01.983 1116 timing_enter $test_name
00:20:01.983 1117 echo "************************************"
00:20:01.983 1118 echo "START TEST $test_name"
00:20:01.983 1119 echo "************************************"
00:20:01.983 1120 xtrace_restore
00:20:01.983 1121 time "$@"
00:20:01.983 1122 xtrace_disable
00:20:01.983 1123 echo "************************************"
00:20:01.983 1124 echo "END TEST $test_name"
00:20:01.983 1125 echo "************************************"
00:20:01.983 1126 timing_exit $test_name
00:20:01.983 ...
00:20:01.983 in /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh:280 -> main(["/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf"])
00:20:01.983 ...
00:20:01.983 275 if [ $SPDK_TEST_NVMF -eq 1 ]; then
00:20:01.983 276 export NET_TYPE
00:20:01.983 277 # The NVMe-oF run test cases are split out like this so that the parser that compiles the
00:20:01.983 278 # list of all tests can properly differentiate them. Please do not merge them into one line.
00:20:01.983 279 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then
00:20:01.983 => 280 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:01.983 281 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:01.983 282 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then
00:20:01.983 283 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:01.983 284 if [[ $SPDK_TEST_URING -eq 0 ]]; then
00:20:01.983 285 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:20:01.983 ...
00:20:01.983
00:20:01.983 ========== Backtrace end ==========
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1190 -- # return 0
00:20:01.983
00:20:01.983 real 12m18.491s
00:20:01.983 user 29m48.398s
00:20:01.983 sys 3m10.870s
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1 -- # autotest_cleanup
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1388 -- # local autotest_es=1
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@1389 -- # xtrace_disable
00:20:01.983 13:03:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:20:11.971 ##### CORE BT bdevperf_3650358.core.bt.txt #####
00:20:11.971
00:20:11.971 gdb: warning: Couldn't determine a path for the index cache directory.
00:20:11.971
00:20:11.971 warning: Can't open file /dev/hugepages/spdk_pid3650358map_0 (deleted) during file-backed mapping note processing
[the same warning repeats for /dev/hugepages/spdk_pid3650358map_1 through map_200, all deleted hugepage mappings of the crashed process]
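The gdb report that follows lists both threads of the crashed bdevperf process: Thread 2 is the DPDK interrupt thread parked in epoll_wait, while Thread 1 took the SIGSEGV inside glibc's _int_malloc during a calloc issued from poller_register, which suggests the heap was already corrupted by the time the adminq poller was re-registered. The helper the harness uses to produce the .core.bt.txt file is not shown in this log, so the command below is only a sketch of how such a report can be generated with gdb in batch mode; the core file path is an assumption:

#!/usr/bin/env bash
# Hedged sketch: dump full backtraces for every thread of a core file, the way
# the bdevperf_3650358.core.bt.txt report below was produced. Adjust $core to
# wherever the kernel actually wrote the core file.
core=bdevperf_3650358.core
exe=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

gdb --batch \
    -ex 'set pagination off' \
    -ex 'thread apply all bt full' \
    "$exe" "$core" > "${core}.bt.txt" 2>&1

The `bt full` variant prints the locals of every frame, which is why the spdk_app_opts structures in the Thread 1 frames below are dumped in full.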
00:20:11.973 [New LWP 3650358]
00:20:11.973 [New LWP 3650360]
00:20:11.973 [Thread debugging using libthread_db enabled]
00:20:11.973 Using host libthread_db library "/usr/lib64/libthread_db.so.1".
00:20:11.973 Core was generated by `/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z'.
00:20:11.973 Program terminated with signal SIGSEGV, Segmentation fault.
00:20:11.973 #0 0x00007f4c17ec4ecc in _int_malloc () from /usr/lib64/libc.so.6
00:20:11.973 [Current thread is 1 (Thread 0x7f4c17316a00 (LWP 3650358))]
00:20:11.973
00:20:11.973 Thread 2 (Thread 0x7f4c172006c0 (LWP 3650360)):
00:20:11.973 #0 0x00007f4c17f3bc62 in epoll_wait () from /usr/lib64/libc.so.6
00:20:11.973 No symbol table info available.
00:20:11.973 #1 0x00007f4c18cf7416 in eal_intr_handle_interrupts (pfd=6, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077
00:20:11.973 events = {{events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}}}
00:20:11.973 nfds = 0
00:20:11.973 #2 0x00007f4c18cf7649 in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163
00:20:11.973 pipe_event = {events = 3, data = {ptr = 0x4, fd = 4, u32 = 4, u64 = 4}}
00:20:11.973 src = 0x0
00:20:11.973 numfds = 1
00:20:11.973 pfd = 6
00:20:11.973 __func__ = "eal_intr_thread_main"
00:20:11.973 #3 0x00007f4c18cd642b in control_thread_start (arg=0x11f6790) at ../lib/eal/common/eal_common_thread.c:282
00:20:11.973 params = 0x11f6790
00:20:11.973 start_arg = 0x0
00:20:11.973 start_routine = 0x7f4c18cf747b
00:20:11.973 #4 0x00007f4c18ceea20 in thread_start_wrapper (arg=0x7ffcda9b76f0) at ../lib/eal/unix/rte_thread.c:112
00:20:11.973 ctx = 0x7ffcda9b76f0
00:20:11.973 thread_func = 0x7f4c18cd63dc
00:20:11.973 thread_args = 0x11f6790
00:20:11.973 ret = 0
00:20:11.973 #5 0x00007f4c17eb5947 in start_thread () from /usr/lib64/libc.so.6
00:20:11.973 No symbol table info available.
00:20:11.973 #6 0x00007f4c17f3b860 in clone3 () from /usr/lib64/libc.so.6
00:20:11.973 No symbol table info available.
00:20:11.973
00:20:11.973 Thread 1 (Thread 0x7f4c17316a00 (LWP 3650358)):
00:20:11.973 #0 0x00007f4c17ec4ecc in _int_malloc () from /usr/lib64/libc.so.6
00:20:11.973 No symbol table info available.
00:20:11.973 #1 0x00007f4c17ec6b7e in calloc () from /usr/lib64/libc.so.6
00:20:11.973 No symbol table info available.
00:20:11.973 #2 0x00007f4c18fb4fb0 in poller_register (fn=0x7f4c19fda363 , arg=0x1327020, period_microseconds=10000, name=0x7f4c1a042431 "bdev_nvme_poll_adminq") at thread.c:1663 00:20:11.973 thread = 0x125e440 00:20:11.973 poller = 0x7f4c19fda363 00:20:11.973 __PRETTY_FUNCTION__ = "poller_register" 00:20:11.973 __func__ = "poller_register" 00:20:11.973 #3 0x00007f4c18fb5b93 in spdk_poller_register_named (fn=0x7f4c19fda363 , arg=0x1327020, period_microseconds=10000, name=0x7f4c1a042431 "bdev_nvme_poll_adminq") at thread.c:1742 00:20:11.973 No locals. 00:20:11.973 #4 0x00007f4c19fda2e5 in bdev_nvme_change_adminq_poll_period (nvme_ctrlr=0x1327020, new_period_us=10000) at bdev_nvme.c:1667 00:20:11.973 No locals. 00:20:11.973 #5 0x00007f4c19fda507 in bdev_nvme_poll_adminq (arg=0x1327020) at bdev_nvme.c:1686 00:20:11.973 rc = -6 00:20:11.973 nvme_ctrlr = 0x1327020 00:20:11.973 disconnected_cb = 0x7f4c19fdf897 00:20:11.973 __PRETTY_FUNCTION__ = "bdev_nvme_poll_adminq" 00:20:11.973 #6 0x00007f4c18faef3e in thread_execute_poller (thread=0x125e440, poller=0x14c7110) at thread.c:959 00:20:11.973 rc = 0 00:20:11.973 __PRETTY_FUNCTION__ = "thread_execute_poller" 00:20:11.973 __func__ = "thread_execute_poller" 00:20:11.973 #7 0x00007f4c18fb0cc4 in thread_poll (thread=0x125e440, max_msgs=0, now=12498457906573682) at thread.c:1085 00:20:11.973 poller_rc = 0 00:20:11.974 msg_count = 1 00:20:11.974 poller = 0x14c7110 00:20:11.974 tmp = 0x0 00:20:11.974 critical_msg = 0x0 00:20:11.974 rc = 1 00:20:11.974 #8 0x00007f4c18fb1963 in spdk_thread_poll (thread=0x125e440, max_msgs=0, now=12498457906573682) at thread.c:1173 00:20:11.974 orig_thread = 0x0 00:20:11.974 rc = 0 00:20:11.974 #9 0x00007f4c1937f356 in _reactor_run (reactor=0x125de00) at reactor.c:914 00:20:11.974 thread = 0x125e440 00:20:11.974 lw_thread = 0x125e788 00:20:11.974 tmp = 0x1329d78 00:20:11.974 now = 12498457906573682 00:20:11.974 rc = 0 00:20:11.974 #10 0x00007f4c1937f9a6 in reactor_run (arg=0x125de00) at reactor.c:952 00:20:11.974 reactor = 0x125de00 00:20:11.974 thread = 0x7f4c18cc692d 00:20:11.974 lw_thread = 0x7f18fb2490 00:20:11.974 tmp = 0x7ffcda9b7a90 00:20:11.974 thread_name = "reactor_2\000\000\000\200\000\000\000\300z\233\332\374\177\000\000(\210F\031\002\000\000" 00:20:11.974 last_sched = 0 00:20:11.974 __func__ = "reactor_run" 00:20:11.974 #11 0x00007f4c193804b3 in spdk_reactors_start () at reactor.c:1068 00:20:11.974 reactor = 0x125de00 00:20:11.974 i = 4294967295 00:20:11.974 current_core = 2 00:20:11.974 rc = 0 00:20:11.974 __func__ = "spdk_reactors_start" 00:20:11.974 __PRETTY_FUNCTION__ = "spdk_reactors_start" 00:20:11.974 #12 0x00007f4c1937496b in spdk_app_start (opts_user=0x7ffcda9b7e30, start_fn=0x41d2b3 , arg1=0x0) at app.c:980 00:20:11.974 rc = 0 00:20:11.974 tty = 0x0 00:20:11.974 tmp_cpumask = {str = '\000' , cpus = "\004", '\000' } 00:20:11.974 g_env_was_setup = false 00:20:11.974 opts_local = {name = 0x44bc83 "bdevperf", json_config_file = 0x0, json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7ffcda9b8f67 "/var/tmp/bdevperf.sock", reactor_mask = 0x7ffcda9b8f5d "0x4", tpoint_group_mask = 0x0, shm_id = -1, reserved52 = "\000\000\000", shutdown_cb = 0x41dc9d , enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_INFO, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 
0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 252, disable_signal_handlers = false, interrupt_mode = false, reserved186 = "\000\000\000\000\000", msg_mempool_size = 262143, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0} 00:20:11.974 opts = 0x7ffcda9b7b50 00:20:11.974 i = 128 00:20:11.974 core = 4294967295 00:20:11.974 __func__ = "spdk_app_start" 00:20:11.974 #13 0x000000000041f164 in main (argc=14, argv=0x7ffcda9b8068) at bdevperf.c:2900 00:20:11.974 opts = {name = 0x44bc83 "bdevperf", json_config_file = 0x0, json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7ffcda9b8f67 "/var/tmp/bdevperf.sock", reactor_mask = 0x7ffcda9b8f5d "0x4", tpoint_group_mask = 0x0, shm_id = -1, reserved52 = "\000\000\000", shutdown_cb = 0x41dc9d , enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_INFO, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 252, disable_signal_handlers = false, interrupt_mode = false, reserved186 = "\000\000\000\000\000", msg_mempool_size = 0, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0} 00:20:11.974 rc = 1 00:20:11.974 00:20:11.974 -- 00:20:16.248 3650139 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt is still up, killing 00:20:16.248 INFO: APP EXITING 00:20:16.248 INFO: killing all VMs 00:20:16.248 INFO: killing vhost app 00:20:16.248 INFO: EXIT DONE 00:20:18.785 Waiting for block devices as requested 00:20:18.785 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:20:18.785 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:18.785 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:19.044 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:19.044 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:19.044 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:19.044 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:19.304 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:19.304 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:19.304 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:19.563 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:19.563 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:19.563 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:19.828 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:19.828 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:19.828 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:20.089 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:26.657 Cleaning 00:20:26.657 Removing: /var/run/dpdk/spdk0/config 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:26.657 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:20:26.657 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:26.657 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:26.657 Removing: /var/run/dpdk/spdk0/mp_socket 00:20:26.657 Removing: /var/run/dpdk/spdk1/config 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:20:26.657 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:26.657 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:26.657 Removing: /var/run/dpdk/spdk2/config 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:20:26.657 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:26.657 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:26.657 Removing: /var/run/dpdk/spdk3/config 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:20:26.657 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:26.657 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:26.657 Removing: /var/run/dpdk/spdk4/config 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:20:26.657 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:26.657 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:26.657 Removing: /dev/shm/bdevperf_trace.pid3568413 00:20:26.657 Removing: /dev/shm/bdevperf_trace.pid3650358 00:20:26.657 Removing: /dev/shm/nvmf_trace.0 00:20:26.657 Removing: /dev/shm/spdk_tgt_trace.pid3473492 00:20:26.657 Removing: /var/tmp/spdk_cpu_lock_000 00:20:26.657 Removing: 
/var/tmp/spdk_cpu_lock_001 00:20:26.657 Removing: /var/tmp/spdk_cpu_lock_002 00:20:26.657 Removing: /var/run/dpdk/spdk0 00:20:26.657 Removing: /var/run/dpdk/spdk1 00:20:26.657 Removing: /var/run/dpdk/spdk2 00:20:26.657 Removing: /var/run/dpdk/spdk3 00:20:26.657 Removing: /var/run/dpdk/spdk4 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3470167 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3471713 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3473492 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3474038 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3474791 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3474982 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3475769 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3475864 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3476077 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3480861 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3482610 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3482842 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3483141 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3483503 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3483749 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3483957 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3484161 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3484387 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3485166 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3487580 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3487803 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3488056 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3488205 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3488606 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3488790 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489195 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489370 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489586 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489771 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489979 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3489996 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3490471 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3490672 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3490922 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3491141 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3491327 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3491399 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3491607 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3491807 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3492013 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3492248 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3492495 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3492738 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3492977 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3493190 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3493396 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3493700 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3493924 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3494266 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3494692 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3494905 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3495109 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3495319 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3495585 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3495848 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3496092 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3496295 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3496367 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3496781 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3500106 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3535655 00:20:26.657 Removing: 
/var/run/dpdk/spdk_pid3539206 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3548069 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3552464 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3555564 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3556120 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3568413 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3568743 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3572229 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3577012 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3579247 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3588248 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3608856 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3611971 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3637539 00:20:26.657 Removing: /var/run/dpdk/spdk_pid3650358 00:20:26.657 Clean 00:22:33.103 13:06:05 nvmf_rdma -- common/autotest_common.sh@1447 -- # return 1 00:22:33.103 13:06:05 nvmf_rdma -- common/autotest_common.sh@1 -- # : 00:22:33.103 13:06:05 nvmf_rdma -- common/autotest_common.sh@1 -- # exit 1 00:22:33.115 [Pipeline] } 00:22:33.134 [Pipeline] // stage 00:22:33.141 [Pipeline] } 00:22:33.162 [Pipeline] // timeout 00:22:33.168 [Pipeline] } 00:22:33.171 ERROR: script returned exit code 1 00:22:33.188 [Pipeline] // catchError 00:22:33.193 [Pipeline] } 00:22:33.209 [Pipeline] // wrap 00:22:33.215 [Pipeline] } 00:22:33.230 [Pipeline] // catchError 00:22:33.238 [Pipeline] stage 00:22:33.240 [Pipeline] { (Epilogue) 00:22:33.254 [Pipeline] catchError 00:22:33.255 [Pipeline] { 00:22:33.269 [Pipeline] echo 00:22:33.270 Cleanup processes 00:22:33.275 [Pipeline] sh 00:22:33.560 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:33.560 3679994 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:33.575 [Pipeline] sh 00:22:33.859 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:33.859 ++ grep -v 'sudo pgrep' 00:22:33.859 ++ awk '{print $1}' 00:22:33.859 + sudo kill -9 00:22:33.859 + true 00:22:33.871 [Pipeline] sh 00:22:34.155 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:37.459 [Pipeline] sh 00:22:37.740 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:37.740 Artifacts sizes are good 00:22:37.750 [Pipeline] archiveArtifacts 00:22:37.754 Archiving artifacts 00:22:38.679 [Pipeline] sh 00:22:39.039 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:22:39.052 [Pipeline] cleanWs 00:22:39.061 [WS-CLEANUP] Deleting project workspace... 00:22:39.061 [WS-CLEANUP] Deferred wipeout is used... 00:22:39.068 [WS-CLEANUP] done 00:22:39.069 [Pipeline] } 00:22:39.089 [Pipeline] // catchError 00:22:39.100 [Pipeline] echo 00:22:39.101 Tests finished with errors. Please check the logs for more info. 00:22:39.105 [Pipeline] echo 00:22:39.106 Execution node will be rebooted. 00:22:39.122 [Pipeline] build 00:22:39.124 Scheduling project: reset-job 00:22:39.136 [Pipeline] sh 00:22:39.441 + logger -p user.info -t JENKINS-CI 00:22:39.449 [Pipeline] } 00:22:39.464 [Pipeline] // stage 00:22:39.469 [Pipeline] } 00:22:39.486 [Pipeline] // node 00:22:39.491 [Pipeline] End of Pipeline 00:22:39.517 Finished: FAILURE
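One recurring idiom in the epilogue above is worth calling out: the cleanup stage greps for leftover SPDK processes under the workspace and kills them, tolerating the case where nothing matches (the `+ true` that follows `+ sudo kill -9` in the trace). A minimal sketch of that guard, with the workspace path taken from this log and the variable names being illustrative only:

#!/usr/bin/env bash
# Hedged sketch of the cleanup idiom shown in the epilogue: kill any leftover
# processes under the workspace, and never let an empty match fail the stage.
workspace=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path from the log

# pgrep -af lists "pid command"; drop the pgrep itself, keep the pids.
pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')

# Unquoted $pids is intentional so multiple pids split into arguments.
# With no pids, kill exits non-zero; `|| true` is the same guard the
# pipeline's `+ true` line reflects, so cleanup cannot fail the build.
sudo kill -9 $pids || true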